00:00:00.000 Started by upstream project "autotest-per-patch" build number 126234 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.086 The recommended git tool is: git 00:00:00.086 using credential 00000000-0000-0000-0000-000000000002 00:00:00.089 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.117 Fetching changes from the remote Git repository 00:00:00.120 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.153 Using shallow fetch with depth 1 00:00:00.153 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.153 > git --version # timeout=10 00:00:00.187 > git --version # 'git version 2.39.2' 00:00:00.187 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.212 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.212 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.038 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.049 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.063 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:05.063 > git config core.sparsecheckout # timeout=10 00:00:05.075 > git read-tree -mu HEAD # timeout=10 00:00:05.093 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:05.115 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:05.116 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:05.214 [Pipeline] Start of Pipeline 00:00:05.226 [Pipeline] library 00:00:05.228 Loading library shm_lib@master 00:00:05.228 Library shm_lib@master is cached. Copying from home. 00:00:05.242 [Pipeline] node 00:00:05.249 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:05.250 [Pipeline] { 00:00:05.261 [Pipeline] catchError 00:00:05.263 [Pipeline] { 00:00:05.273 [Pipeline] wrap 00:00:05.281 [Pipeline] { 00:00:05.289 [Pipeline] stage 00:00:05.291 [Pipeline] { (Prologue) 00:00:05.307 [Pipeline] echo 00:00:05.308 Node: VM-host-SM17 00:00:05.314 [Pipeline] cleanWs 00:00:05.322 [WS-CLEANUP] Deleting project workspace... 00:00:05.322 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.329 [WS-CLEANUP] done 00:00:05.553 [Pipeline] setCustomBuildProperty 00:00:05.636 [Pipeline] httpRequest 00:00:05.665 [Pipeline] echo 00:00:05.666 Sorcerer 10.211.164.101 is alive 00:00:05.673 [Pipeline] httpRequest 00:00:05.677 HttpMethod: GET 00:00:05.677 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.677 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.700 Response Code: HTTP/1.1 200 OK 00:00:05.700 Success: Status code 200 is in the accepted range: 200,404 00:00:05.701 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:16.339 [Pipeline] sh 00:00:16.617 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:16.634 [Pipeline] httpRequest 00:00:16.662 [Pipeline] echo 00:00:16.664 Sorcerer 10.211.164.101 is alive 00:00:16.672 [Pipeline] httpRequest 00:00:16.676 HttpMethod: GET 00:00:16.677 URL: http://10.211.164.101/packages/spdk_cdc37ee83b9008feb075db6e5f474e1ec08c5b9a.tar.gz 00:00:16.677 Sending request to url: http://10.211.164.101/packages/spdk_cdc37ee83b9008feb075db6e5f474e1ec08c5b9a.tar.gz 00:00:16.690 Response Code: HTTP/1.1 200 OK 00:00:16.690 Success: Status code 200 is in the accepted range: 200,404 00:00:16.691 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_cdc37ee83b9008feb075db6e5f474e1ec08c5b9a.tar.gz 00:01:06.113 [Pipeline] sh 00:01:06.393 + tar --no-same-owner -xf spdk_cdc37ee83b9008feb075db6e5f474e1ec08c5b9a.tar.gz 00:01:09.717 [Pipeline] sh 00:01:09.996 + git -C spdk log --oneline -n5 00:01:09.996 cdc37ee83 env_dpdk: deprecate spdk_env_opts_init and spdk_env_init 00:01:09.996 24018edd4 all: replace spdk_env_opts_init/spdk_env_init with _ext variant 00:01:09.996 3269bc4bc env: add spdk_env_opts_init_ext() 00:01:09.996 d9917142f env: pack and assert size for spdk_env_opts 00:01:09.996 1bd83e221 sock: add spdk_sock_get_numa_socket_id 00:01:10.015 [Pipeline] writeFile 00:01:10.056 [Pipeline] sh 00:01:10.334 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:10.347 [Pipeline] sh 00:01:10.625 + cat autorun-spdk.conf 00:01:10.625 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.625 SPDK_TEST_NVMF=1 00:01:10.625 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.625 SPDK_TEST_URING=1 00:01:10.625 SPDK_TEST_USDT=1 00:01:10.625 SPDK_RUN_UBSAN=1 00:01:10.625 NET_TYPE=virt 00:01:10.625 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.632 RUN_NIGHTLY=0 00:01:10.635 [Pipeline] } 00:01:10.656 [Pipeline] // stage 00:01:10.671 [Pipeline] stage 00:01:10.673 [Pipeline] { (Run VM) 00:01:10.685 [Pipeline] sh 00:01:10.961 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:10.961 + echo 'Start stage prepare_nvme.sh' 00:01:10.961 Start stage prepare_nvme.sh 00:01:10.961 + [[ -n 4 ]] 00:01:10.961 + disk_prefix=ex4 00:01:10.961 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:10.961 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:10.961 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:10.961 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.961 ++ SPDK_TEST_NVMF=1 00:01:10.961 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.961 ++ SPDK_TEST_URING=1 00:01:10.961 ++ SPDK_TEST_USDT=1 00:01:10.961 ++ SPDK_RUN_UBSAN=1 00:01:10.961 ++ NET_TYPE=virt 00:01:10.961 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.961 ++ RUN_NIGHTLY=0 00:01:10.961 + cd 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:10.961 + nvme_files=() 00:01:10.961 + declare -A nvme_files 00:01:10.961 + backend_dir=/var/lib/libvirt/images/backends 00:01:10.961 + nvme_files['nvme.img']=5G 00:01:10.961 + nvme_files['nvme-cmb.img']=5G 00:01:10.961 + nvme_files['nvme-multi0.img']=4G 00:01:10.961 + nvme_files['nvme-multi1.img']=4G 00:01:10.961 + nvme_files['nvme-multi2.img']=4G 00:01:10.961 + nvme_files['nvme-openstack.img']=8G 00:01:10.961 + nvme_files['nvme-zns.img']=5G 00:01:10.961 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:10.961 + (( SPDK_TEST_FTL == 1 )) 00:01:10.961 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:10.961 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:10.961 + for nvme in "${!nvme_files[@]}" 00:01:10.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:10.961 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.961 + for nvme in "${!nvme_files[@]}" 00:01:10.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:10.961 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.961 + for nvme in "${!nvme_files[@]}" 00:01:10.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:10.961 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:10.961 + for nvme in "${!nvme_files[@]}" 00:01:10.962 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:11.528 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.528 + for nvme in "${!nvme_files[@]}" 00:01:11.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:11.787 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.787 + for nvme in "${!nvme_files[@]}" 00:01:11.787 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:11.787 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.787 + for nvme in "${!nvme_files[@]}" 00:01:11.787 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:12.354 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.354 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:12.354 + echo 'End stage prepare_nvme.sh' 00:01:12.354 End stage prepare_nvme.sh 00:01:12.366 [Pipeline] sh 00:01:12.715 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:12.715 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:01:12.715 00:01:12.715 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:12.715 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:12.715 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:12.715 HELP=0 00:01:12.715 DRY_RUN=0 00:01:12.715 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:12.715 NVME_DISKS_TYPE=nvme,nvme, 00:01:12.715 NVME_AUTO_CREATE=0 00:01:12.715 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:12.715 NVME_CMB=,, 00:01:12.715 NVME_PMR=,, 00:01:12.715 NVME_ZNS=,, 00:01:12.715 NVME_MS=,, 00:01:12.715 NVME_FDP=,, 00:01:12.715 SPDK_VAGRANT_DISTRO=fedora38 00:01:12.715 SPDK_VAGRANT_VMCPU=10 00:01:12.715 SPDK_VAGRANT_VMRAM=12288 00:01:12.715 SPDK_VAGRANT_PROVIDER=libvirt 00:01:12.715 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:12.715 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:12.715 SPDK_OPENSTACK_NETWORK=0 00:01:12.715 VAGRANT_PACKAGE_BOX=0 00:01:12.715 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:12.715 FORCE_DISTRO=true 00:01:12.715 VAGRANT_BOX_VERSION= 00:01:12.715 EXTRA_VAGRANTFILES= 00:01:12.715 NIC_MODEL=e1000 00:01:12.715 00:01:12.715 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:12.715 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:16.000 Bringing machine 'default' up with 'libvirt' provider... 00:01:16.257 ==> default: Creating image (snapshot of base box volume). 00:01:16.257 ==> default: Creating domain with the following settings... 00:01:16.257 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721069503_127f70fd814d7e23a7b3 00:01:16.257 ==> default: -- Domain type: kvm 00:01:16.257 ==> default: -- Cpus: 10 00:01:16.257 ==> default: -- Feature: acpi 00:01:16.257 ==> default: -- Feature: apic 00:01:16.257 ==> default: -- Feature: pae 00:01:16.257 ==> default: -- Memory: 12288M 00:01:16.257 ==> default: -- Memory Backing: hugepages: 00:01:16.257 ==> default: -- Management MAC: 00:01:16.257 ==> default: -- Loader: 00:01:16.257 ==> default: -- Nvram: 00:01:16.257 ==> default: -- Base box: spdk/fedora38 00:01:16.257 ==> default: -- Storage pool: default 00:01:16.257 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721069503_127f70fd814d7e23a7b3.img (20G) 00:01:16.257 ==> default: -- Volume Cache: default 00:01:16.257 ==> default: -- Kernel: 00:01:16.257 ==> default: -- Initrd: 00:01:16.257 ==> default: -- Graphics Type: vnc 00:01:16.257 ==> default: -- Graphics Port: -1 00:01:16.257 ==> default: -- Graphics IP: 127.0.0.1 00:01:16.257 ==> default: -- Graphics Password: Not defined 00:01:16.257 ==> default: -- Video Type: cirrus 00:01:16.257 ==> default: -- Video VRAM: 9216 00:01:16.257 ==> default: -- Sound Type: 00:01:16.257 ==> default: -- Keymap: en-us 00:01:16.257 ==> default: -- TPM Path: 00:01:16.257 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:16.257 ==> default: -- Command line args: 00:01:16.257 ==> default: -> value=-device, 00:01:16.257 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:16.257 ==> default: -> value=-drive, 00:01:16.257 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:16.257 ==> default: -> value=-device, 00:01:16.257 ==> 
default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.257 ==> default: -> value=-device, 00:01:16.257 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:16.257 ==> default: -> value=-drive, 00:01:16.257 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:16.257 ==> default: -> value=-device, 00:01:16.257 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.257 ==> default: -> value=-drive, 00:01:16.257 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:16.257 ==> default: -> value=-device, 00:01:16.257 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.257 ==> default: -> value=-drive, 00:01:16.257 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:16.257 ==> default: -> value=-device, 00:01:16.257 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.515 ==> default: Creating shared folders metadata... 00:01:16.515 ==> default: Starting domain. 00:01:18.000 ==> default: Waiting for domain to get an IP address... 00:01:36.152 ==> default: Waiting for SSH to become available... 00:01:36.152 ==> default: Configuring and enabling network interfaces... 00:01:38.685 default: SSH address: 192.168.121.213:22 00:01:38.685 default: SSH username: vagrant 00:01:38.685 default: SSH auth method: private key 00:01:40.588 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:48.700 ==> default: Mounting SSHFS shared folder... 00:01:49.636 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:49.636 ==> default: Checking Mount.. 00:01:51.011 ==> default: Folder Successfully Mounted! 00:01:51.011 ==> default: Running provisioner: file... 00:01:51.943 default: ~/.gitconfig => .gitconfig 00:01:52.200 00:01:52.200 SUCCESS! 00:01:52.200 00:01:52.200 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:52.200 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:52.201 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:01:52.201 00:01:52.209 [Pipeline] } 00:01:52.222 [Pipeline] // stage 00:01:52.229 [Pipeline] dir 00:01:52.229 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:52.230 [Pipeline] { 00:01:52.240 [Pipeline] catchError 00:01:52.241 [Pipeline] { 00:01:52.252 [Pipeline] sh 00:01:52.526 + vagrant ssh-config --host vagrant 00:01:52.526 + sed -ne /^Host/,$p 00:01:52.526 + tee ssh_conf 00:01:55.808 Host vagrant 00:01:55.808 HostName 192.168.121.213 00:01:55.808 User vagrant 00:01:55.808 Port 22 00:01:55.808 UserKnownHostsFile /dev/null 00:01:55.808 StrictHostKeyChecking no 00:01:55.808 PasswordAuthentication no 00:01:55.808 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:55.808 IdentitiesOnly yes 00:01:55.808 LogLevel FATAL 00:01:55.808 ForwardAgent yes 00:01:55.808 ForwardX11 yes 00:01:55.809 00:01:55.821 [Pipeline] withEnv 00:01:55.824 [Pipeline] { 00:01:55.868 [Pipeline] sh 00:01:56.146 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:56.146 source /etc/os-release 00:01:56.146 [[ -e /image.version ]] && img=$(< /image.version) 00:01:56.146 # Minimal, systemd-like check. 00:01:56.146 if [[ -e /.dockerenv ]]; then 00:01:56.146 # Clear garbage from the node's name: 00:01:56.146 # agt-er_autotest_547-896 -> autotest_547-896 00:01:56.146 # $HOSTNAME is the actual container id 00:01:56.146 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:56.146 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:56.146 # We can assume this is a mount from a host where container is running, 00:01:56.146 # so fetch its hostname to easily identify the target swarm worker. 00:01:56.146 container="$(< /etc/hostname) ($agent)" 00:01:56.146 else 00:01:56.146 # Fallback 00:01:56.146 container=$agent 00:01:56.146 fi 00:01:56.146 fi 00:01:56.146 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:56.146 00:01:56.413 [Pipeline] } 00:01:56.428 [Pipeline] // withEnv 00:01:56.434 [Pipeline] setCustomBuildProperty 00:01:56.446 [Pipeline] stage 00:01:56.448 [Pipeline] { (Tests) 00:01:56.462 [Pipeline] sh 00:01:56.738 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:56.750 [Pipeline] sh 00:01:57.023 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:57.295 [Pipeline] timeout 00:01:57.296 Timeout set to expire in 30 min 00:01:57.298 [Pipeline] { 00:01:57.311 [Pipeline] sh 00:01:57.583 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:58.151 HEAD is now at cdc37ee83 env_dpdk: deprecate spdk_env_opts_init and spdk_env_init 00:01:58.165 [Pipeline] sh 00:01:58.474 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:58.495 [Pipeline] sh 00:01:58.773 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:58.790 [Pipeline] sh 00:01:59.068 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:59.326 ++ readlink -f spdk_repo 00:01:59.326 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:59.326 + [[ -n /home/vagrant/spdk_repo ]] 00:01:59.326 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:59.326 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 
00:01:59.326 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:59.326 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:59.326 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:59.326 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:59.326 + cd /home/vagrant/spdk_repo 00:01:59.326 + source /etc/os-release 00:01:59.326 ++ NAME='Fedora Linux' 00:01:59.326 ++ VERSION='38 (Cloud Edition)' 00:01:59.326 ++ ID=fedora 00:01:59.327 ++ VERSION_ID=38 00:01:59.327 ++ VERSION_CODENAME= 00:01:59.327 ++ PLATFORM_ID=platform:f38 00:01:59.327 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:59.327 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:59.327 ++ LOGO=fedora-logo-icon 00:01:59.327 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:59.327 ++ HOME_URL=https://fedoraproject.org/ 00:01:59.327 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:59.327 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:59.327 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:59.327 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:59.327 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:59.327 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:59.327 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:59.327 ++ SUPPORT_END=2024-05-14 00:01:59.327 ++ VARIANT='Cloud Edition' 00:01:59.327 ++ VARIANT_ID=cloud 00:01:59.327 + uname -a 00:01:59.327 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:59.327 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:59.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:59.585 Hugepages 00:01:59.585 node hugesize free / total 00:01:59.585 node0 1048576kB 0 / 0 00:01:59.842 node0 2048kB 0 / 0 00:01:59.842 00:01:59.842 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:59.842 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:59.842 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:59.842 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:59.842 + rm -f /tmp/spdk-ld-path 00:01:59.842 + source autorun-spdk.conf 00:01:59.842 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.842 ++ SPDK_TEST_NVMF=1 00:01:59.842 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:59.842 ++ SPDK_TEST_URING=1 00:01:59.842 ++ SPDK_TEST_USDT=1 00:01:59.842 ++ SPDK_RUN_UBSAN=1 00:01:59.842 ++ NET_TYPE=virt 00:01:59.842 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:59.842 ++ RUN_NIGHTLY=0 00:01:59.842 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:59.842 + [[ -n '' ]] 00:01:59.842 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:59.842 + for M in /var/spdk/build-*-manifest.txt 00:01:59.842 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:59.842 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:59.842 + for M in /var/spdk/build-*-manifest.txt 00:01:59.842 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:59.842 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:59.842 ++ uname 00:01:59.842 + [[ Linux == \L\i\n\u\x ]] 00:01:59.842 + sudo dmesg -T 00:01:59.842 + sudo dmesg --clear 00:01:59.842 + dmesg_pid=5106 00:01:59.842 + [[ Fedora Linux == FreeBSD ]] 00:01:59.842 + sudo dmesg -Tw 00:01:59.843 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:59.843 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:59.843 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:59.843 + [[ -x 
/usr/src/fio-static/fio ]] 00:01:59.843 + export FIO_BIN=/usr/src/fio-static/fio 00:01:59.843 + FIO_BIN=/usr/src/fio-static/fio 00:01:59.843 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:59.843 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:59.843 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:59.843 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:59.843 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:59.843 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:59.843 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:59.843 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:59.843 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:59.843 Test configuration: 00:01:59.843 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.843 SPDK_TEST_NVMF=1 00:01:59.843 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:59.843 SPDK_TEST_URING=1 00:01:59.843 SPDK_TEST_USDT=1 00:01:59.843 SPDK_RUN_UBSAN=1 00:01:59.843 NET_TYPE=virt 00:01:59.843 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.101 RUN_NIGHTLY=0 18:52:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:00.101 18:52:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:00.101 18:52:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:00.101 18:52:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:00.101 18:52:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.101 18:52:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.101 18:52:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.101 18:52:27 -- paths/export.sh@5 -- $ export PATH 00:02:00.101 18:52:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.101 18:52:27 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:00.101 18:52:27 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:00.101 18:52:27 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721069547.XXXXXX 00:02:00.101 18:52:27 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721069547.wSlYsk 00:02:00.101 18:52:27 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:00.101 18:52:27 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:00.101 18:52:27 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:00.101 18:52:27 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:00.101 18:52:27 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:00.101 18:52:27 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:00.101 18:52:27 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:00.101 18:52:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.101 18:52:27 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:00.101 18:52:27 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:00.101 18:52:27 -- pm/common@17 -- $ local monitor 00:02:00.101 18:52:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.101 18:52:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.101 18:52:27 -- pm/common@25 -- $ sleep 1 00:02:00.101 18:52:27 -- pm/common@21 -- $ date +%s 00:02:00.101 18:52:27 -- pm/common@21 -- $ date +%s 00:02:00.101 18:52:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721069547 00:02:00.101 18:52:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721069547 00:02:00.101 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721069547_collect-vmstat.pm.log 00:02:00.101 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721069547_collect-cpu-load.pm.log 00:02:01.036 18:52:28 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:01.036 18:52:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:01.036 18:52:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:01.036 18:52:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:01.036 18:52:28 -- spdk/autobuild.sh@16 -- $ date -u 00:02:01.036 Mon Jul 15 06:52:28 PM UTC 2024 00:02:01.036 18:52:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:01.036 v24.09-pre-226-gcdc37ee83 00:02:01.036 18:52:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:01.036 18:52:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:01.036 18:52:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:01.036 18:52:28 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:01.036 18:52:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:01.036 18:52:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.036 ************************************ 00:02:01.036 START TEST ubsan 00:02:01.036 ************************************ 00:02:01.036 using ubsan 00:02:01.036 18:52:28 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:01.036 00:02:01.036 real 0m0.000s 
00:02:01.036 user 0m0.000s 00:02:01.036 sys 0m0.000s 00:02:01.036 18:52:28 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:01.036 ************************************ 00:02:01.036 END TEST ubsan 00:02:01.036 18:52:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:01.036 ************************************ 00:02:01.036 18:52:28 -- common/autotest_common.sh@1142 -- $ return 0 00:02:01.036 18:52:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:01.036 18:52:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:01.036 18:52:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:01.036 18:52:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:01.036 18:52:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:01.036 18:52:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:01.036 18:52:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:01.036 18:52:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:01.036 18:52:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:01.295 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:01.295 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:01.553 Using 'verbs' RDMA provider 00:02:17.380 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:29.584 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:29.584 Creating mk/config.mk...done. 00:02:29.584 Creating mk/cc.flags.mk...done. 00:02:29.585 Type 'make' to build. 00:02:29.585 18:52:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:29.585 18:52:55 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:29.585 18:52:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:29.585 18:52:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.585 ************************************ 00:02:29.585 START TEST make 00:02:29.585 ************************************ 00:02:29.585 18:52:55 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:29.585 make[1]: Nothing to be done for 'all'. 
00:02:39.571 The Meson build system 00:02:39.571 Version: 1.3.1 00:02:39.571 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:39.571 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:39.571 Build type: native build 00:02:39.571 Program cat found: YES (/usr/bin/cat) 00:02:39.571 Project name: DPDK 00:02:39.571 Project version: 24.03.0 00:02:39.571 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:39.571 C linker for the host machine: cc ld.bfd 2.39-16 00:02:39.571 Host machine cpu family: x86_64 00:02:39.571 Host machine cpu: x86_64 00:02:39.571 Message: ## Building in Developer Mode ## 00:02:39.571 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:39.571 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:39.571 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:39.571 Program python3 found: YES (/usr/bin/python3) 00:02:39.571 Program cat found: YES (/usr/bin/cat) 00:02:39.571 Compiler for C supports arguments -march=native: YES 00:02:39.571 Checking for size of "void *" : 8 00:02:39.571 Checking for size of "void *" : 8 (cached) 00:02:39.571 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:39.571 Library m found: YES 00:02:39.571 Library numa found: YES 00:02:39.571 Has header "numaif.h" : YES 00:02:39.571 Library fdt found: NO 00:02:39.571 Library execinfo found: NO 00:02:39.571 Has header "execinfo.h" : YES 00:02:39.571 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:39.571 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:39.571 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:39.571 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:39.571 Run-time dependency openssl found: YES 3.0.9 00:02:39.571 Run-time dependency libpcap found: YES 1.10.4 00:02:39.571 Has header "pcap.h" with dependency libpcap: YES 00:02:39.571 Compiler for C supports arguments -Wcast-qual: YES 00:02:39.571 Compiler for C supports arguments -Wdeprecated: YES 00:02:39.571 Compiler for C supports arguments -Wformat: YES 00:02:39.571 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:39.571 Compiler for C supports arguments -Wformat-security: NO 00:02:39.571 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:39.571 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:39.571 Compiler for C supports arguments -Wnested-externs: YES 00:02:39.571 Compiler for C supports arguments -Wold-style-definition: YES 00:02:39.571 Compiler for C supports arguments -Wpointer-arith: YES 00:02:39.571 Compiler for C supports arguments -Wsign-compare: YES 00:02:39.571 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:39.571 Compiler for C supports arguments -Wundef: YES 00:02:39.571 Compiler for C supports arguments -Wwrite-strings: YES 00:02:39.571 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:39.571 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:39.571 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:39.571 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:39.571 Program objdump found: YES (/usr/bin/objdump) 00:02:39.571 Compiler for C supports arguments -mavx512f: YES 00:02:39.571 Checking if "AVX512 checking" compiles: YES 00:02:39.571 Fetching value of define "__SSE4_2__" : 1 00:02:39.571 Fetching value of define 
"__AES__" : 1 00:02:39.571 Fetching value of define "__AVX__" : 1 00:02:39.571 Fetching value of define "__AVX2__" : 1 00:02:39.571 Fetching value of define "__AVX512BW__" : (undefined) 00:02:39.571 Fetching value of define "__AVX512CD__" : (undefined) 00:02:39.571 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:39.571 Fetching value of define "__AVX512F__" : (undefined) 00:02:39.571 Fetching value of define "__AVX512VL__" : (undefined) 00:02:39.571 Fetching value of define "__PCLMUL__" : 1 00:02:39.571 Fetching value of define "__RDRND__" : 1 00:02:39.571 Fetching value of define "__RDSEED__" : 1 00:02:39.571 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:39.571 Fetching value of define "__znver1__" : (undefined) 00:02:39.571 Fetching value of define "__znver2__" : (undefined) 00:02:39.571 Fetching value of define "__znver3__" : (undefined) 00:02:39.571 Fetching value of define "__znver4__" : (undefined) 00:02:39.571 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:39.571 Message: lib/log: Defining dependency "log" 00:02:39.571 Message: lib/kvargs: Defining dependency "kvargs" 00:02:39.571 Message: lib/telemetry: Defining dependency "telemetry" 00:02:39.571 Checking for function "getentropy" : NO 00:02:39.571 Message: lib/eal: Defining dependency "eal" 00:02:39.571 Message: lib/ring: Defining dependency "ring" 00:02:39.571 Message: lib/rcu: Defining dependency "rcu" 00:02:39.571 Message: lib/mempool: Defining dependency "mempool" 00:02:39.571 Message: lib/mbuf: Defining dependency "mbuf" 00:02:39.571 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:39.571 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:39.571 Compiler for C supports arguments -mpclmul: YES 00:02:39.571 Compiler for C supports arguments -maes: YES 00:02:39.571 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.571 Compiler for C supports arguments -mavx512bw: YES 00:02:39.571 Compiler for C supports arguments -mavx512dq: YES 00:02:39.571 Compiler for C supports arguments -mavx512vl: YES 00:02:39.571 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:39.571 Compiler for C supports arguments -mavx2: YES 00:02:39.571 Compiler for C supports arguments -mavx: YES 00:02:39.571 Message: lib/net: Defining dependency "net" 00:02:39.571 Message: lib/meter: Defining dependency "meter" 00:02:39.571 Message: lib/ethdev: Defining dependency "ethdev" 00:02:39.571 Message: lib/pci: Defining dependency "pci" 00:02:39.571 Message: lib/cmdline: Defining dependency "cmdline" 00:02:39.571 Message: lib/hash: Defining dependency "hash" 00:02:39.571 Message: lib/timer: Defining dependency "timer" 00:02:39.571 Message: lib/compressdev: Defining dependency "compressdev" 00:02:39.571 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:39.571 Message: lib/dmadev: Defining dependency "dmadev" 00:02:39.571 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:39.571 Message: lib/power: Defining dependency "power" 00:02:39.571 Message: lib/reorder: Defining dependency "reorder" 00:02:39.571 Message: lib/security: Defining dependency "security" 00:02:39.571 Has header "linux/userfaultfd.h" : YES 00:02:39.571 Has header "linux/vduse.h" : YES 00:02:39.571 Message: lib/vhost: Defining dependency "vhost" 00:02:39.571 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:39.571 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:39.571 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:39.571 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:39.571 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:39.571 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:39.571 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:39.571 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:39.571 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:39.571 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:39.571 Program doxygen found: YES (/usr/bin/doxygen) 00:02:39.571 Configuring doxy-api-html.conf using configuration 00:02:39.571 Configuring doxy-api-man.conf using configuration 00:02:39.571 Program mandb found: YES (/usr/bin/mandb) 00:02:39.571 Program sphinx-build found: NO 00:02:39.571 Configuring rte_build_config.h using configuration 00:02:39.571 Message: 00:02:39.571 ================= 00:02:39.571 Applications Enabled 00:02:39.571 ================= 00:02:39.571 00:02:39.571 apps: 00:02:39.571 00:02:39.571 00:02:39.571 Message: 00:02:39.571 ================= 00:02:39.571 Libraries Enabled 00:02:39.571 ================= 00:02:39.571 00:02:39.571 libs: 00:02:39.571 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:39.571 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:39.571 cryptodev, dmadev, power, reorder, security, vhost, 00:02:39.571 00:02:39.571 Message: 00:02:39.571 =============== 00:02:39.571 Drivers Enabled 00:02:39.571 =============== 00:02:39.571 00:02:39.571 common: 00:02:39.571 00:02:39.571 bus: 00:02:39.571 pci, vdev, 00:02:39.571 mempool: 00:02:39.571 ring, 00:02:39.571 dma: 00:02:39.571 00:02:39.571 net: 00:02:39.571 00:02:39.571 crypto: 00:02:39.571 00:02:39.571 compress: 00:02:39.571 00:02:39.571 vdpa: 00:02:39.571 00:02:39.571 00:02:39.572 Message: 00:02:39.572 ================= 00:02:39.572 Content Skipped 00:02:39.572 ================= 00:02:39.572 00:02:39.572 apps: 00:02:39.572 dumpcap: explicitly disabled via build config 00:02:39.572 graph: explicitly disabled via build config 00:02:39.572 pdump: explicitly disabled via build config 00:02:39.572 proc-info: explicitly disabled via build config 00:02:39.572 test-acl: explicitly disabled via build config 00:02:39.572 test-bbdev: explicitly disabled via build config 00:02:39.572 test-cmdline: explicitly disabled via build config 00:02:39.572 test-compress-perf: explicitly disabled via build config 00:02:39.572 test-crypto-perf: explicitly disabled via build config 00:02:39.572 test-dma-perf: explicitly disabled via build config 00:02:39.572 test-eventdev: explicitly disabled via build config 00:02:39.572 test-fib: explicitly disabled via build config 00:02:39.572 test-flow-perf: explicitly disabled via build config 00:02:39.572 test-gpudev: explicitly disabled via build config 00:02:39.572 test-mldev: explicitly disabled via build config 00:02:39.572 test-pipeline: explicitly disabled via build config 00:02:39.572 test-pmd: explicitly disabled via build config 00:02:39.572 test-regex: explicitly disabled via build config 00:02:39.572 test-sad: explicitly disabled via build config 00:02:39.572 test-security-perf: explicitly disabled via build config 00:02:39.572 00:02:39.572 libs: 00:02:39.572 argparse: explicitly disabled via build config 00:02:39.572 metrics: explicitly disabled via build config 00:02:39.572 acl: explicitly disabled via build config 00:02:39.572 bbdev: explicitly disabled via build config 00:02:39.572 
bitratestats: explicitly disabled via build config 00:02:39.572 bpf: explicitly disabled via build config 00:02:39.572 cfgfile: explicitly disabled via build config 00:02:39.572 distributor: explicitly disabled via build config 00:02:39.572 efd: explicitly disabled via build config 00:02:39.572 eventdev: explicitly disabled via build config 00:02:39.572 dispatcher: explicitly disabled via build config 00:02:39.572 gpudev: explicitly disabled via build config 00:02:39.572 gro: explicitly disabled via build config 00:02:39.572 gso: explicitly disabled via build config 00:02:39.572 ip_frag: explicitly disabled via build config 00:02:39.572 jobstats: explicitly disabled via build config 00:02:39.572 latencystats: explicitly disabled via build config 00:02:39.572 lpm: explicitly disabled via build config 00:02:39.572 member: explicitly disabled via build config 00:02:39.572 pcapng: explicitly disabled via build config 00:02:39.572 rawdev: explicitly disabled via build config 00:02:39.572 regexdev: explicitly disabled via build config 00:02:39.572 mldev: explicitly disabled via build config 00:02:39.572 rib: explicitly disabled via build config 00:02:39.572 sched: explicitly disabled via build config 00:02:39.572 stack: explicitly disabled via build config 00:02:39.572 ipsec: explicitly disabled via build config 00:02:39.572 pdcp: explicitly disabled via build config 00:02:39.572 fib: explicitly disabled via build config 00:02:39.572 port: explicitly disabled via build config 00:02:39.572 pdump: explicitly disabled via build config 00:02:39.572 table: explicitly disabled via build config 00:02:39.572 pipeline: explicitly disabled via build config 00:02:39.572 graph: explicitly disabled via build config 00:02:39.572 node: explicitly disabled via build config 00:02:39.572 00:02:39.572 drivers: 00:02:39.572 common/cpt: not in enabled drivers build config 00:02:39.572 common/dpaax: not in enabled drivers build config 00:02:39.572 common/iavf: not in enabled drivers build config 00:02:39.572 common/idpf: not in enabled drivers build config 00:02:39.572 common/ionic: not in enabled drivers build config 00:02:39.572 common/mvep: not in enabled drivers build config 00:02:39.572 common/octeontx: not in enabled drivers build config 00:02:39.572 bus/auxiliary: not in enabled drivers build config 00:02:39.572 bus/cdx: not in enabled drivers build config 00:02:39.572 bus/dpaa: not in enabled drivers build config 00:02:39.572 bus/fslmc: not in enabled drivers build config 00:02:39.572 bus/ifpga: not in enabled drivers build config 00:02:39.572 bus/platform: not in enabled drivers build config 00:02:39.572 bus/uacce: not in enabled drivers build config 00:02:39.572 bus/vmbus: not in enabled drivers build config 00:02:39.572 common/cnxk: not in enabled drivers build config 00:02:39.572 common/mlx5: not in enabled drivers build config 00:02:39.572 common/nfp: not in enabled drivers build config 00:02:39.572 common/nitrox: not in enabled drivers build config 00:02:39.572 common/qat: not in enabled drivers build config 00:02:39.572 common/sfc_efx: not in enabled drivers build config 00:02:39.572 mempool/bucket: not in enabled drivers build config 00:02:39.572 mempool/cnxk: not in enabled drivers build config 00:02:39.572 mempool/dpaa: not in enabled drivers build config 00:02:39.572 mempool/dpaa2: not in enabled drivers build config 00:02:39.572 mempool/octeontx: not in enabled drivers build config 00:02:39.572 mempool/stack: not in enabled drivers build config 00:02:39.572 dma/cnxk: not in enabled drivers build 
config 00:02:39.572 dma/dpaa: not in enabled drivers build config 00:02:39.572 dma/dpaa2: not in enabled drivers build config 00:02:39.572 dma/hisilicon: not in enabled drivers build config 00:02:39.572 dma/idxd: not in enabled drivers build config 00:02:39.572 dma/ioat: not in enabled drivers build config 00:02:39.572 dma/skeleton: not in enabled drivers build config 00:02:39.572 net/af_packet: not in enabled drivers build config 00:02:39.572 net/af_xdp: not in enabled drivers build config 00:02:39.572 net/ark: not in enabled drivers build config 00:02:39.572 net/atlantic: not in enabled drivers build config 00:02:39.572 net/avp: not in enabled drivers build config 00:02:39.572 net/axgbe: not in enabled drivers build config 00:02:39.572 net/bnx2x: not in enabled drivers build config 00:02:39.572 net/bnxt: not in enabled drivers build config 00:02:39.572 net/bonding: not in enabled drivers build config 00:02:39.572 net/cnxk: not in enabled drivers build config 00:02:39.572 net/cpfl: not in enabled drivers build config 00:02:39.572 net/cxgbe: not in enabled drivers build config 00:02:39.572 net/dpaa: not in enabled drivers build config 00:02:39.572 net/dpaa2: not in enabled drivers build config 00:02:39.572 net/e1000: not in enabled drivers build config 00:02:39.572 net/ena: not in enabled drivers build config 00:02:39.572 net/enetc: not in enabled drivers build config 00:02:39.572 net/enetfec: not in enabled drivers build config 00:02:39.572 net/enic: not in enabled drivers build config 00:02:39.572 net/failsafe: not in enabled drivers build config 00:02:39.572 net/fm10k: not in enabled drivers build config 00:02:39.572 net/gve: not in enabled drivers build config 00:02:39.572 net/hinic: not in enabled drivers build config 00:02:39.572 net/hns3: not in enabled drivers build config 00:02:39.572 net/i40e: not in enabled drivers build config 00:02:39.572 net/iavf: not in enabled drivers build config 00:02:39.572 net/ice: not in enabled drivers build config 00:02:39.572 net/idpf: not in enabled drivers build config 00:02:39.572 net/igc: not in enabled drivers build config 00:02:39.572 net/ionic: not in enabled drivers build config 00:02:39.572 net/ipn3ke: not in enabled drivers build config 00:02:39.572 net/ixgbe: not in enabled drivers build config 00:02:39.572 net/mana: not in enabled drivers build config 00:02:39.572 net/memif: not in enabled drivers build config 00:02:39.572 net/mlx4: not in enabled drivers build config 00:02:39.572 net/mlx5: not in enabled drivers build config 00:02:39.572 net/mvneta: not in enabled drivers build config 00:02:39.572 net/mvpp2: not in enabled drivers build config 00:02:39.572 net/netvsc: not in enabled drivers build config 00:02:39.572 net/nfb: not in enabled drivers build config 00:02:39.572 net/nfp: not in enabled drivers build config 00:02:39.572 net/ngbe: not in enabled drivers build config 00:02:39.572 net/null: not in enabled drivers build config 00:02:39.572 net/octeontx: not in enabled drivers build config 00:02:39.572 net/octeon_ep: not in enabled drivers build config 00:02:39.572 net/pcap: not in enabled drivers build config 00:02:39.572 net/pfe: not in enabled drivers build config 00:02:39.572 net/qede: not in enabled drivers build config 00:02:39.572 net/ring: not in enabled drivers build config 00:02:39.572 net/sfc: not in enabled drivers build config 00:02:39.572 net/softnic: not in enabled drivers build config 00:02:39.572 net/tap: not in enabled drivers build config 00:02:39.572 net/thunderx: not in enabled drivers build config 00:02:39.572 
net/txgbe: not in enabled drivers build config 00:02:39.572 net/vdev_netvsc: not in enabled drivers build config 00:02:39.572 net/vhost: not in enabled drivers build config 00:02:39.572 net/virtio: not in enabled drivers build config 00:02:39.572 net/vmxnet3: not in enabled drivers build config 00:02:39.572 raw/*: missing internal dependency, "rawdev" 00:02:39.572 crypto/armv8: not in enabled drivers build config 00:02:39.572 crypto/bcmfs: not in enabled drivers build config 00:02:39.572 crypto/caam_jr: not in enabled drivers build config 00:02:39.572 crypto/ccp: not in enabled drivers build config 00:02:39.572 crypto/cnxk: not in enabled drivers build config 00:02:39.572 crypto/dpaa_sec: not in enabled drivers build config 00:02:39.572 crypto/dpaa2_sec: not in enabled drivers build config 00:02:39.572 crypto/ipsec_mb: not in enabled drivers build config 00:02:39.572 crypto/mlx5: not in enabled drivers build config 00:02:39.572 crypto/mvsam: not in enabled drivers build config 00:02:39.572 crypto/nitrox: not in enabled drivers build config 00:02:39.572 crypto/null: not in enabled drivers build config 00:02:39.572 crypto/octeontx: not in enabled drivers build config 00:02:39.572 crypto/openssl: not in enabled drivers build config 00:02:39.572 crypto/scheduler: not in enabled drivers build config 00:02:39.572 crypto/uadk: not in enabled drivers build config 00:02:39.572 crypto/virtio: not in enabled drivers build config 00:02:39.572 compress/isal: not in enabled drivers build config 00:02:39.572 compress/mlx5: not in enabled drivers build config 00:02:39.572 compress/nitrox: not in enabled drivers build config 00:02:39.572 compress/octeontx: not in enabled drivers build config 00:02:39.572 compress/zlib: not in enabled drivers build config 00:02:39.572 regex/*: missing internal dependency, "regexdev" 00:02:39.572 ml/*: missing internal dependency, "mldev" 00:02:39.572 vdpa/ifc: not in enabled drivers build config 00:02:39.572 vdpa/mlx5: not in enabled drivers build config 00:02:39.572 vdpa/nfp: not in enabled drivers build config 00:02:39.572 vdpa/sfc: not in enabled drivers build config 00:02:39.572 event/*: missing internal dependency, "eventdev" 00:02:39.572 baseband/*: missing internal dependency, "bbdev" 00:02:39.572 gpu/*: missing internal dependency, "gpudev" 00:02:39.572 00:02:39.572 00:02:39.573 Build targets in project: 85 00:02:39.573 00:02:39.573 DPDK 24.03.0 00:02:39.573 00:02:39.573 User defined options 00:02:39.573 buildtype : debug 00:02:39.573 default_library : shared 00:02:39.573 libdir : lib 00:02:39.573 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:39.573 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:39.573 c_link_args : 00:02:39.573 cpu_instruction_set: native 00:02:39.573 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:39.573 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:39.573 enable_docs : false 00:02:39.573 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:39.573 enable_kmods : false 00:02:39.573 max_lcores : 128 00:02:39.573 tests : false 00:02:39.573 00:02:39.573 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:39.573 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:39.573 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:39.573 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:39.573 [3/268] Linking static target lib/librte_log.a 00:02:39.573 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:39.573 [5/268] Linking static target lib/librte_kvargs.a 00:02:39.573 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:40.139 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.139 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:40.397 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:40.397 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:40.397 [11/268] Linking static target lib/librte_telemetry.a 00:02:40.397 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:40.397 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:40.397 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:40.655 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:40.655 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:40.655 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:40.655 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:40.655 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.655 [20/268] Linking target lib/librte_log.so.24.1 00:02:40.913 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:41.197 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:41.197 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:41.197 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:41.197 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:41.197 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:41.455 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:41.455 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.455 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:41.455 [30/268] Linking target lib/librte_telemetry.so.24.1 00:02:41.455 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:41.455 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:41.712 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:41.712 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:41.712 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:41.712 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:41.712 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:41.971 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:41.971 [39/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:42.229 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:42.229 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:42.229 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:42.229 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:42.229 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:42.487 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:42.487 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:42.487 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:42.745 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:42.745 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:43.003 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:43.003 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:43.003 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:43.003 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:43.261 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:43.261 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:43.519 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:43.519 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:43.519 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:43.777 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:43.777 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:43.777 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:43.777 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:43.777 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:44.036 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:44.036 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:44.294 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:44.294 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:44.552 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:44.552 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:44.552 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:44.552 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:44.552 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:44.810 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:44.810 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:44.810 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:44.810 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:45.069 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:45.069 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:45.069 [79/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:45.327 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:45.585 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:45.585 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:45.585 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:45.586 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:45.586 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:45.844 [86/268] Linking static target lib/librte_ring.a 00:02:45.844 [87/268] Linking static target lib/librte_eal.a 00:02:45.844 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:46.102 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:46.102 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:46.102 [91/268] Linking static target lib/librte_rcu.a 00:02:46.102 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:46.359 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:46.359 [94/268] Linking static target lib/librte_mempool.a 00:02:46.359 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.617 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:46.617 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.617 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:46.617 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:46.617 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:46.617 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:46.875 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:46.875 [103/268] Linking static target lib/librte_mbuf.a 00:02:47.158 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:47.418 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:47.418 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:47.418 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:47.418 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:47.418 [109/268] Linking static target lib/librte_meter.a 00:02:47.677 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:47.677 [111/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.677 [112/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:47.677 [113/268] Linking static target lib/librte_net.a 00:02:47.936 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.936 [115/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.194 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:48.194 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:48.194 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.453 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:48.453 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:02:48.712 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:48.971 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:48.971 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:48.971 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:48.971 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:48.971 [126/268] Linking static target lib/librte_pci.a 00:02:48.971 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:48.971 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:49.230 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:49.230 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:49.230 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:49.230 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:49.488 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.488 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:49.488 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:49.488 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:49.488 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:49.488 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:49.488 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:49.488 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:49.488 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:49.488 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:49.746 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:49.747 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:49.747 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:50.005 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:50.005 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:50.005 [148/268] Linking static target lib/librte_cmdline.a 00:02:50.005 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:50.264 [150/268] Linking static target lib/librte_ethdev.a 00:02:50.264 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:50.264 [152/268] Linking static target lib/librte_timer.a 00:02:50.264 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:50.523 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:50.523 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:50.523 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:50.523 [157/268] Linking static target lib/librte_hash.a 00:02:50.523 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:50.782 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:50.782 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.041 [161/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:51.041 [162/268] Linking static target lib/librte_compressdev.a 00:02:51.041 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:51.299 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:51.299 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.299 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:51.558 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.558 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:51.558 [169/268] Linking static target lib/librte_dmadev.a 00:02:51.558 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.558 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:51.816 [172/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.816 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:51.816 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:51.816 [175/268] Linking static target lib/librte_cryptodev.a 00:02:52.074 [176/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.074 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.074 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.333 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.333 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:52.591 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.591 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:52.591 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:52.591 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:52.849 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:52.849 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:52.849 [187/268] Linking static target lib/librte_power.a 00:02:52.849 [188/268] Linking static target lib/librte_reorder.a 00:02:53.108 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.108 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.108 [191/268] Linking static target lib/librte_security.a 00:02:53.366 [192/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.366 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.366 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:53.625 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:53.883 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.883 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.141 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:54.141 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:54.141 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 
00:02:54.400 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.400 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.400 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.400 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:54.658 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:54.658 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:54.658 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:54.916 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:54.916 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:54.916 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:54.916 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:54.916 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:54.916 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:55.175 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.175 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.175 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:55.175 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:55.175 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.175 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.175 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:55.175 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:55.175 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:55.433 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.433 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:55.433 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.433 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.433 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:55.691 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.256 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:56.256 [230/268] Linking static target lib/librte_vhost.a 00:02:57.192 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.192 [232/268] Linking target lib/librte_eal.so.24.1 00:02:57.451 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:57.451 [234/268] Linking target lib/librte_meter.so.24.1 00:02:57.451 [235/268] Linking target lib/librte_ring.so.24.1 00:02:57.451 [236/268] Linking target lib/librte_pci.so.24.1 00:02:57.451 [237/268] Linking target lib/librte_timer.so.24.1 00:02:57.451 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:57.451 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:57.451 [240/268] Generating symbol file 
lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:57.451 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:57.451 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:57.451 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:57.451 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:57.710 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:57.710 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:57.710 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:57.710 [248/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.710 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:57.710 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:57.969 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:57.969 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:57.969 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:57.969 [254/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.969 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:57.969 [256/268] Linking target lib/librte_net.so.24.1 00:02:57.969 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:57.969 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:58.228 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:58.228 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:58.228 [261/268] Linking target lib/librte_hash.so.24.1 00:02:58.228 [262/268] Linking target lib/librte_security.so.24.1 00:02:58.228 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:58.228 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:58.488 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:58.488 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:58.488 [267/268] Linking target lib/librte_power.so.24.1 00:02:58.488 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:58.488 INFO: autodetecting backend as ninja 00:02:58.488 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:59.864 CC lib/log/log_flags.o 00:02:59.864 CC lib/log/log.o 00:02:59.865 CC lib/ut/ut.o 00:02:59.865 CC lib/log/log_deprecated.o 00:02:59.865 CC lib/ut_mock/mock.o 00:02:59.865 LIB libspdk_log.a 00:02:59.865 LIB libspdk_ut.a 00:02:59.865 LIB libspdk_ut_mock.a 00:03:00.121 SO libspdk_log.so.7.0 00:03:00.121 SO libspdk_ut_mock.so.6.0 00:03:00.121 SO libspdk_ut.so.2.0 00:03:00.121 SYMLINK libspdk_ut.so 00:03:00.121 SYMLINK libspdk_ut_mock.so 00:03:00.121 SYMLINK libspdk_log.so 00:03:00.379 CXX lib/trace_parser/trace.o 00:03:00.379 CC lib/ioat/ioat.o 00:03:00.379 CC lib/dma/dma.o 00:03:00.379 CC lib/util/base64.o 00:03:00.379 CC lib/util/bit_array.o 00:03:00.379 CC lib/util/cpuset.o 00:03:00.379 CC lib/util/crc16.o 00:03:00.379 CC lib/util/crc32.o 00:03:00.379 CC lib/util/crc32c.o 00:03:00.379 CC lib/vfio_user/host/vfio_user_pci.o 00:03:00.379 CC lib/util/crc32_ieee.o 00:03:00.379 CC lib/util/crc64.o 00:03:00.379 CC lib/util/dif.o 00:03:00.379 CC lib/vfio_user/host/vfio_user.o 
00:03:00.637 LIB libspdk_dma.a 00:03:00.637 CC lib/util/fd.o 00:03:00.637 CC lib/util/fd_group.o 00:03:00.637 SO libspdk_dma.so.4.0 00:03:00.637 CC lib/util/file.o 00:03:00.637 CC lib/util/hexlify.o 00:03:00.637 LIB libspdk_ioat.a 00:03:00.637 SYMLINK libspdk_dma.so 00:03:00.637 CC lib/util/iov.o 00:03:00.637 SO libspdk_ioat.so.7.0 00:03:00.637 SYMLINK libspdk_ioat.so 00:03:00.637 CC lib/util/math.o 00:03:00.637 CC lib/util/net.o 00:03:00.637 CC lib/util/pipe.o 00:03:00.637 LIB libspdk_vfio_user.a 00:03:00.895 SO libspdk_vfio_user.so.5.0 00:03:00.895 CC lib/util/strerror_tls.o 00:03:00.895 CC lib/util/string.o 00:03:00.895 CC lib/util/uuid.o 00:03:00.895 SYMLINK libspdk_vfio_user.so 00:03:00.895 CC lib/util/xor.o 00:03:00.895 CC lib/util/zipf.o 00:03:01.152 LIB libspdk_util.a 00:03:01.152 SO libspdk_util.so.9.1 00:03:01.411 SYMLINK libspdk_util.so 00:03:01.411 LIB libspdk_trace_parser.a 00:03:01.411 SO libspdk_trace_parser.so.5.0 00:03:01.411 CC lib/rdma_provider/common.o 00:03:01.411 CC lib/vmd/vmd.o 00:03:01.411 CC lib/idxd/idxd.o 00:03:01.411 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:01.411 CC lib/vmd/led.o 00:03:01.411 CC lib/conf/conf.o 00:03:01.411 CC lib/json/json_parse.o 00:03:01.411 CC lib/rdma_utils/rdma_utils.o 00:03:01.411 CC lib/env_dpdk/env.o 00:03:01.669 SYMLINK libspdk_trace_parser.so 00:03:01.669 CC lib/json/json_util.o 00:03:01.669 CC lib/json/json_write.o 00:03:01.669 CC lib/idxd/idxd_user.o 00:03:01.669 LIB libspdk_rdma_provider.a 00:03:01.669 LIB libspdk_conf.a 00:03:01.669 SO libspdk_rdma_provider.so.6.0 00:03:01.669 CC lib/env_dpdk/memory.o 00:03:01.669 SO libspdk_conf.so.6.0 00:03:01.929 CC lib/env_dpdk/pci.o 00:03:01.929 LIB libspdk_rdma_utils.a 00:03:01.929 SO libspdk_rdma_utils.so.1.0 00:03:01.929 SYMLINK libspdk_rdma_provider.so 00:03:01.929 CC lib/env_dpdk/init.o 00:03:01.929 SYMLINK libspdk_conf.so 00:03:01.929 CC lib/idxd/idxd_kernel.o 00:03:01.929 SYMLINK libspdk_rdma_utils.so 00:03:01.929 CC lib/env_dpdk/threads.o 00:03:01.929 CC lib/env_dpdk/pci_ioat.o 00:03:01.929 LIB libspdk_json.a 00:03:01.929 CC lib/env_dpdk/pci_virtio.o 00:03:01.929 SO libspdk_json.so.6.0 00:03:01.929 CC lib/env_dpdk/pci_vmd.o 00:03:02.185 CC lib/env_dpdk/pci_idxd.o 00:03:02.185 LIB libspdk_idxd.a 00:03:02.185 SYMLINK libspdk_json.so 00:03:02.185 CC lib/env_dpdk/pci_event.o 00:03:02.185 SO libspdk_idxd.so.12.0 00:03:02.185 LIB libspdk_vmd.a 00:03:02.185 CC lib/env_dpdk/sigbus_handler.o 00:03:02.185 CC lib/env_dpdk/pci_dpdk.o 00:03:02.185 SO libspdk_vmd.so.6.0 00:03:02.185 SYMLINK libspdk_idxd.so 00:03:02.185 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:02.185 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:02.185 SYMLINK libspdk_vmd.so 00:03:02.442 CC lib/jsonrpc/jsonrpc_server.o 00:03:02.442 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:02.442 CC lib/jsonrpc/jsonrpc_client.o 00:03:02.442 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:02.699 LIB libspdk_jsonrpc.a 00:03:02.699 SO libspdk_jsonrpc.so.6.0 00:03:02.956 SYMLINK libspdk_jsonrpc.so 00:03:02.956 LIB libspdk_env_dpdk.a 00:03:02.956 SO libspdk_env_dpdk.so.15.0 00:03:03.213 CC lib/rpc/rpc.o 00:03:03.213 SYMLINK libspdk_env_dpdk.so 00:03:03.213 LIB libspdk_rpc.a 00:03:03.471 SO libspdk_rpc.so.6.0 00:03:03.471 SYMLINK libspdk_rpc.so 00:03:03.729 CC lib/keyring/keyring_rpc.o 00:03:03.729 CC lib/keyring/keyring.o 00:03:03.729 CC lib/trace/trace.o 00:03:03.729 CC lib/notify/notify.o 00:03:03.729 CC lib/trace/trace_flags.o 00:03:03.729 CC lib/notify/notify_rpc.o 00:03:03.729 CC lib/trace/trace_rpc.o 00:03:03.729 LIB libspdk_notify.a 00:03:03.988 SO 
libspdk_notify.so.6.0 00:03:03.988 LIB libspdk_keyring.a 00:03:03.988 LIB libspdk_trace.a 00:03:03.988 SYMLINK libspdk_notify.so 00:03:03.988 SO libspdk_keyring.so.1.0 00:03:03.988 SO libspdk_trace.so.10.0 00:03:03.988 SYMLINK libspdk_keyring.so 00:03:03.988 SYMLINK libspdk_trace.so 00:03:04.246 CC lib/thread/thread.o 00:03:04.246 CC lib/thread/iobuf.o 00:03:04.246 CC lib/sock/sock.o 00:03:04.246 CC lib/sock/sock_rpc.o 00:03:04.817 LIB libspdk_sock.a 00:03:04.817 SO libspdk_sock.so.10.0 00:03:04.817 SYMLINK libspdk_sock.so 00:03:05.100 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:05.100 CC lib/nvme/nvme_ctrlr.o 00:03:05.100 CC lib/nvme/nvme_fabric.o 00:03:05.100 CC lib/nvme/nvme_ns_cmd.o 00:03:05.100 CC lib/nvme/nvme_ns.o 00:03:05.100 CC lib/nvme/nvme_pcie_common.o 00:03:05.100 CC lib/nvme/nvme_pcie.o 00:03:05.100 CC lib/nvme/nvme_qpair.o 00:03:05.100 CC lib/nvme/nvme.o 00:03:06.050 LIB libspdk_thread.a 00:03:06.050 SO libspdk_thread.so.10.1 00:03:06.050 CC lib/nvme/nvme_quirks.o 00:03:06.050 SYMLINK libspdk_thread.so 00:03:06.050 CC lib/nvme/nvme_transport.o 00:03:06.050 CC lib/nvme/nvme_discovery.o 00:03:06.050 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:06.309 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:06.309 CC lib/nvme/nvme_tcp.o 00:03:06.309 CC lib/blob/blobstore.o 00:03:06.309 CC lib/accel/accel.o 00:03:06.309 CC lib/accel/accel_rpc.o 00:03:06.566 CC lib/accel/accel_sw.o 00:03:06.566 CC lib/blob/request.o 00:03:06.825 CC lib/blob/zeroes.o 00:03:06.825 CC lib/nvme/nvme_opal.o 00:03:06.825 CC lib/blob/blob_bs_dev.o 00:03:06.825 CC lib/nvme/nvme_io_msg.o 00:03:06.825 CC lib/nvme/nvme_poll_group.o 00:03:06.825 CC lib/nvme/nvme_zns.o 00:03:06.825 CC lib/nvme/nvme_stubs.o 00:03:07.083 CC lib/nvme/nvme_auth.o 00:03:07.342 LIB libspdk_accel.a 00:03:07.342 CC lib/nvme/nvme_cuse.o 00:03:07.342 SO libspdk_accel.so.15.1 00:03:07.342 SYMLINK libspdk_accel.so 00:03:07.342 CC lib/nvme/nvme_rdma.o 00:03:07.601 CC lib/init/json_config.o 00:03:07.601 CC lib/init/subsystem.o 00:03:07.601 CC lib/init/subsystem_rpc.o 00:03:07.601 CC lib/init/rpc.o 00:03:07.601 CC lib/virtio/virtio.o 00:03:07.601 CC lib/bdev/bdev.o 00:03:07.601 CC lib/virtio/virtio_vhost_user.o 00:03:07.601 CC lib/virtio/virtio_vfio_user.o 00:03:07.860 CC lib/virtio/virtio_pci.o 00:03:07.860 LIB libspdk_init.a 00:03:07.860 SO libspdk_init.so.5.0 00:03:07.860 CC lib/bdev/bdev_rpc.o 00:03:08.119 SYMLINK libspdk_init.so 00:03:08.119 CC lib/bdev/bdev_zone.o 00:03:08.119 CC lib/bdev/part.o 00:03:08.119 CC lib/bdev/scsi_nvme.o 00:03:08.119 LIB libspdk_virtio.a 00:03:08.119 SO libspdk_virtio.so.7.0 00:03:08.119 CC lib/event/app.o 00:03:08.119 CC lib/event/reactor.o 00:03:08.119 CC lib/event/log_rpc.o 00:03:08.119 SYMLINK libspdk_virtio.so 00:03:08.119 CC lib/event/app_rpc.o 00:03:08.119 CC lib/event/scheduler_static.o 00:03:08.686 LIB libspdk_event.a 00:03:08.686 SO libspdk_event.so.14.0 00:03:08.686 SYMLINK libspdk_event.so 00:03:08.686 LIB libspdk_nvme.a 00:03:08.944 SO libspdk_nvme.so.13.1 00:03:09.203 LIB libspdk_blob.a 00:03:09.203 SO libspdk_blob.so.11.0 00:03:09.203 SYMLINK libspdk_nvme.so 00:03:09.462 SYMLINK libspdk_blob.so 00:03:09.721 CC lib/lvol/lvol.o 00:03:09.721 CC lib/blobfs/blobfs.o 00:03:09.721 CC lib/blobfs/tree.o 00:03:10.336 LIB libspdk_bdev.a 00:03:10.336 SO libspdk_bdev.so.15.1 00:03:10.336 LIB libspdk_blobfs.a 00:03:10.614 SO libspdk_blobfs.so.10.0 00:03:10.614 SYMLINK libspdk_bdev.so 00:03:10.614 SYMLINK libspdk_blobfs.so 00:03:10.614 LIB libspdk_lvol.a 00:03:10.614 SO libspdk_lvol.so.10.0 00:03:10.614 SYMLINK libspdk_lvol.so 
00:03:10.614 CC lib/ftl/ftl_core.o 00:03:10.614 CC lib/ftl/ftl_init.o 00:03:10.614 CC lib/ftl/ftl_layout.o 00:03:10.614 CC lib/ftl/ftl_debug.o 00:03:10.614 CC lib/ftl/ftl_io.o 00:03:10.614 CC lib/ftl/ftl_sb.o 00:03:10.614 CC lib/scsi/dev.o 00:03:10.614 CC lib/nvmf/ctrlr.o 00:03:10.614 CC lib/nbd/nbd.o 00:03:10.614 CC lib/ublk/ublk.o 00:03:10.872 CC lib/ftl/ftl_l2p.o 00:03:10.872 CC lib/scsi/lun.o 00:03:10.872 CC lib/scsi/port.o 00:03:10.872 CC lib/ftl/ftl_l2p_flat.o 00:03:10.872 CC lib/scsi/scsi.o 00:03:11.131 CC lib/scsi/scsi_bdev.o 00:03:11.131 CC lib/nbd/nbd_rpc.o 00:03:11.131 CC lib/ftl/ftl_nv_cache.o 00:03:11.131 CC lib/ftl/ftl_band.o 00:03:11.131 CC lib/ftl/ftl_band_ops.o 00:03:11.131 CC lib/ftl/ftl_writer.o 00:03:11.131 CC lib/ftl/ftl_rq.o 00:03:11.131 LIB libspdk_nbd.a 00:03:11.131 CC lib/nvmf/ctrlr_discovery.o 00:03:11.390 SO libspdk_nbd.so.7.0 00:03:11.390 CC lib/ublk/ublk_rpc.o 00:03:11.390 SYMLINK libspdk_nbd.so 00:03:11.390 CC lib/ftl/ftl_reloc.o 00:03:11.390 CC lib/ftl/ftl_l2p_cache.o 00:03:11.390 CC lib/ftl/ftl_p2l.o 00:03:11.390 CC lib/ftl/mngt/ftl_mngt.o 00:03:11.390 CC lib/scsi/scsi_pr.o 00:03:11.390 LIB libspdk_ublk.a 00:03:11.390 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:11.649 SO libspdk_ublk.so.3.0 00:03:11.649 SYMLINK libspdk_ublk.so 00:03:11.649 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:11.649 CC lib/nvmf/ctrlr_bdev.o 00:03:11.649 CC lib/nvmf/subsystem.o 00:03:11.649 CC lib/nvmf/nvmf.o 00:03:11.649 CC lib/nvmf/nvmf_rpc.o 00:03:11.649 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:11.908 CC lib/nvmf/transport.o 00:03:11.908 CC lib/scsi/scsi_rpc.o 00:03:11.908 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:11.908 CC lib/scsi/task.o 00:03:11.908 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:12.167 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:12.167 LIB libspdk_scsi.a 00:03:12.167 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:12.167 SO libspdk_scsi.so.9.0 00:03:12.425 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:12.425 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:12.425 SYMLINK libspdk_scsi.so 00:03:12.425 CC lib/nvmf/tcp.o 00:03:12.425 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:12.425 CC lib/nvmf/stubs.o 00:03:12.425 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:12.425 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:12.684 CC lib/ftl/utils/ftl_conf.o 00:03:12.684 CC lib/iscsi/conn.o 00:03:12.684 CC lib/iscsi/init_grp.o 00:03:12.684 CC lib/ftl/utils/ftl_md.o 00:03:12.684 CC lib/ftl/utils/ftl_mempool.o 00:03:12.684 CC lib/iscsi/iscsi.o 00:03:12.684 CC lib/iscsi/md5.o 00:03:12.943 CC lib/iscsi/param.o 00:03:12.943 CC lib/iscsi/portal_grp.o 00:03:12.943 CC lib/nvmf/mdns_server.o 00:03:12.943 CC lib/nvmf/rdma.o 00:03:12.943 CC lib/nvmf/auth.o 00:03:13.202 CC lib/ftl/utils/ftl_bitmap.o 00:03:13.202 CC lib/iscsi/tgt_node.o 00:03:13.202 CC lib/vhost/vhost.o 00:03:13.202 CC lib/ftl/utils/ftl_property.o 00:03:13.202 CC lib/iscsi/iscsi_subsystem.o 00:03:13.202 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:13.202 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:13.461 CC lib/vhost/vhost_rpc.o 00:03:13.461 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:13.461 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:13.461 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:13.720 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:13.720 CC lib/iscsi/iscsi_rpc.o 00:03:13.720 CC lib/iscsi/task.o 00:03:13.720 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:13.720 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:13.720 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:13.979 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:13.979 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:13.979 CC lib/ftl/base/ftl_base_dev.o 00:03:13.979 CC 
lib/ftl/base/ftl_base_bdev.o 00:03:13.979 CC lib/ftl/ftl_trace.o 00:03:13.979 CC lib/vhost/vhost_scsi.o 00:03:13.979 CC lib/vhost/vhost_blk.o 00:03:13.979 LIB libspdk_iscsi.a 00:03:13.979 CC lib/vhost/rte_vhost_user.o 00:03:14.238 SO libspdk_iscsi.so.8.0 00:03:14.238 LIB libspdk_ftl.a 00:03:14.238 SYMLINK libspdk_iscsi.so 00:03:14.497 SO libspdk_ftl.so.9.0 00:03:14.756 LIB libspdk_nvmf.a 00:03:15.015 SYMLINK libspdk_ftl.so 00:03:15.015 SO libspdk_nvmf.so.19.0 00:03:15.015 LIB libspdk_vhost.a 00:03:15.015 SYMLINK libspdk_nvmf.so 00:03:15.273 SO libspdk_vhost.so.8.0 00:03:15.273 SYMLINK libspdk_vhost.so 00:03:15.532 CC module/env_dpdk/env_dpdk_rpc.o 00:03:15.790 CC module/accel/error/accel_error.o 00:03:15.790 CC module/accel/ioat/accel_ioat.o 00:03:15.790 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:15.790 CC module/sock/uring/uring.o 00:03:15.790 CC module/sock/posix/posix.o 00:03:15.790 CC module/blob/bdev/blob_bdev.o 00:03:15.790 CC module/keyring/linux/keyring.o 00:03:15.790 CC module/keyring/file/keyring.o 00:03:15.790 CC module/accel/dsa/accel_dsa.o 00:03:15.790 LIB libspdk_env_dpdk_rpc.a 00:03:15.790 SO libspdk_env_dpdk_rpc.so.6.0 00:03:15.790 SYMLINK libspdk_env_dpdk_rpc.so 00:03:15.790 CC module/accel/dsa/accel_dsa_rpc.o 00:03:15.790 CC module/keyring/linux/keyring_rpc.o 00:03:16.066 CC module/keyring/file/keyring_rpc.o 00:03:16.066 CC module/accel/ioat/accel_ioat_rpc.o 00:03:16.066 CC module/accel/error/accel_error_rpc.o 00:03:16.066 LIB libspdk_scheduler_dynamic.a 00:03:16.066 SO libspdk_scheduler_dynamic.so.4.0 00:03:16.066 LIB libspdk_blob_bdev.a 00:03:16.066 SYMLINK libspdk_scheduler_dynamic.so 00:03:16.066 SO libspdk_blob_bdev.so.11.0 00:03:16.066 LIB libspdk_accel_dsa.a 00:03:16.066 LIB libspdk_keyring_linux.a 00:03:16.066 LIB libspdk_accel_ioat.a 00:03:16.066 LIB libspdk_keyring_file.a 00:03:16.066 SO libspdk_accel_dsa.so.5.0 00:03:16.066 LIB libspdk_accel_error.a 00:03:16.066 SO libspdk_keyring_linux.so.1.0 00:03:16.066 SO libspdk_accel_ioat.so.6.0 00:03:16.066 SO libspdk_keyring_file.so.1.0 00:03:16.066 SYMLINK libspdk_blob_bdev.so 00:03:16.066 SO libspdk_accel_error.so.2.0 00:03:16.066 SYMLINK libspdk_accel_dsa.so 00:03:16.066 SYMLINK libspdk_keyring_linux.so 00:03:16.066 SYMLINK libspdk_accel_ioat.so 00:03:16.066 SYMLINK libspdk_keyring_file.so 00:03:16.066 SYMLINK libspdk_accel_error.so 00:03:16.324 CC module/accel/iaa/accel_iaa.o 00:03:16.324 CC module/accel/iaa/accel_iaa_rpc.o 00:03:16.324 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:16.324 CC module/scheduler/gscheduler/gscheduler.o 00:03:16.324 LIB libspdk_scheduler_dpdk_governor.a 00:03:16.324 LIB libspdk_accel_iaa.a 00:03:16.324 LIB libspdk_sock_uring.a 00:03:16.324 CC module/bdev/error/vbdev_error.o 00:03:16.324 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:16.324 CC module/bdev/gpt/gpt.o 00:03:16.324 CC module/bdev/delay/vbdev_delay.o 00:03:16.324 CC module/blobfs/bdev/blobfs_bdev.o 00:03:16.583 SO libspdk_accel_iaa.so.3.0 00:03:16.583 SO libspdk_sock_uring.so.5.0 00:03:16.583 LIB libspdk_sock_posix.a 00:03:16.583 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:16.583 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:16.583 LIB libspdk_scheduler_gscheduler.a 00:03:16.583 SO libspdk_sock_posix.so.6.0 00:03:16.583 SYMLINK libspdk_accel_iaa.so 00:03:16.583 SYMLINK libspdk_sock_uring.so 00:03:16.583 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:16.583 CC module/bdev/lvol/vbdev_lvol.o 00:03:16.583 SO libspdk_scheduler_gscheduler.so.4.0 00:03:16.583 SYMLINK libspdk_sock_posix.so 00:03:16.583 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:03:16.583 CC module/bdev/gpt/vbdev_gpt.o 00:03:16.583 CC module/bdev/error/vbdev_error_rpc.o 00:03:16.583 SYMLINK libspdk_scheduler_gscheduler.so 00:03:16.583 LIB libspdk_blobfs_bdev.a 00:03:16.842 CC module/bdev/malloc/bdev_malloc.o 00:03:16.842 SO libspdk_blobfs_bdev.so.6.0 00:03:16.842 SYMLINK libspdk_blobfs_bdev.so 00:03:16.842 LIB libspdk_bdev_delay.a 00:03:16.842 LIB libspdk_bdev_error.a 00:03:16.842 CC module/bdev/null/bdev_null.o 00:03:16.842 SO libspdk_bdev_delay.so.6.0 00:03:16.842 SO libspdk_bdev_error.so.6.0 00:03:16.842 CC module/bdev/nvme/bdev_nvme.o 00:03:16.842 CC module/bdev/passthru/vbdev_passthru.o 00:03:16.842 SYMLINK libspdk_bdev_error.so 00:03:16.842 SYMLINK libspdk_bdev_delay.so 00:03:16.842 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:16.842 LIB libspdk_bdev_gpt.a 00:03:17.101 SO libspdk_bdev_gpt.so.6.0 00:03:17.101 CC module/bdev/raid/bdev_raid.o 00:03:17.101 SYMLINK libspdk_bdev_gpt.so 00:03:17.101 CC module/bdev/null/bdev_null_rpc.o 00:03:17.101 LIB libspdk_bdev_lvol.a 00:03:17.101 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:17.101 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:17.101 CC module/bdev/nvme/nvme_rpc.o 00:03:17.101 CC module/bdev/split/vbdev_split.o 00:03:17.101 SO libspdk_bdev_lvol.so.6.0 00:03:17.101 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:17.101 SYMLINK libspdk_bdev_lvol.so 00:03:17.101 LIB libspdk_bdev_passthru.a 00:03:17.360 LIB libspdk_bdev_null.a 00:03:17.360 SO libspdk_bdev_passthru.so.6.0 00:03:17.360 LIB libspdk_bdev_malloc.a 00:03:17.360 SO libspdk_bdev_null.so.6.0 00:03:17.360 SO libspdk_bdev_malloc.so.6.0 00:03:17.360 SYMLINK libspdk_bdev_passthru.so 00:03:17.360 CC module/bdev/raid/bdev_raid_rpc.o 00:03:17.360 CC module/bdev/split/vbdev_split_rpc.o 00:03:17.360 CC module/bdev/raid/bdev_raid_sb.o 00:03:17.360 CC module/bdev/uring/bdev_uring.o 00:03:17.360 SYMLINK libspdk_bdev_null.so 00:03:17.360 SYMLINK libspdk_bdev_malloc.so 00:03:17.360 CC module/bdev/uring/bdev_uring_rpc.o 00:03:17.618 LIB libspdk_bdev_split.a 00:03:17.618 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:17.618 CC module/bdev/aio/bdev_aio.o 00:03:17.618 SO libspdk_bdev_split.so.6.0 00:03:17.618 CC module/bdev/aio/bdev_aio_rpc.o 00:03:17.618 CC module/bdev/raid/raid0.o 00:03:17.618 SYMLINK libspdk_bdev_split.so 00:03:17.618 CC module/bdev/nvme/bdev_mdns_client.o 00:03:17.618 CC module/bdev/raid/raid1.o 00:03:17.618 CC module/bdev/nvme/vbdev_opal.o 00:03:17.618 LIB libspdk_bdev_zone_block.a 00:03:17.618 LIB libspdk_bdev_uring.a 00:03:17.877 CC module/bdev/raid/concat.o 00:03:17.877 SO libspdk_bdev_zone_block.so.6.0 00:03:17.877 SO libspdk_bdev_uring.so.6.0 00:03:17.877 LIB libspdk_bdev_aio.a 00:03:17.877 SYMLINK libspdk_bdev_zone_block.so 00:03:17.877 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:17.877 SYMLINK libspdk_bdev_uring.so 00:03:17.877 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:17.877 SO libspdk_bdev_aio.so.6.0 00:03:17.877 SYMLINK libspdk_bdev_aio.so 00:03:17.877 CC module/bdev/ftl/bdev_ftl.o 00:03:17.877 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:17.877 LIB libspdk_bdev_raid.a 00:03:18.136 CC module/bdev/iscsi/bdev_iscsi.o 00:03:18.136 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:18.136 SO libspdk_bdev_raid.so.6.0 00:03:18.136 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:18.136 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:18.136 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:18.136 SYMLINK libspdk_bdev_raid.so 00:03:18.395 LIB libspdk_bdev_ftl.a 00:03:18.395 SO libspdk_bdev_ftl.so.6.0 00:03:18.395 
SYMLINK libspdk_bdev_ftl.so 00:03:18.395 LIB libspdk_bdev_iscsi.a 00:03:18.395 SO libspdk_bdev_iscsi.so.6.0 00:03:18.395 SYMLINK libspdk_bdev_iscsi.so 00:03:18.654 LIB libspdk_bdev_virtio.a 00:03:18.913 SO libspdk_bdev_virtio.so.6.0 00:03:18.913 SYMLINK libspdk_bdev_virtio.so 00:03:19.172 LIB libspdk_bdev_nvme.a 00:03:19.172 SO libspdk_bdev_nvme.so.7.0 00:03:19.430 SYMLINK libspdk_bdev_nvme.so 00:03:19.998 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:19.998 CC module/event/subsystems/iobuf/iobuf.o 00:03:19.998 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:19.998 CC module/event/subsystems/sock/sock.o 00:03:19.998 CC module/event/subsystems/keyring/keyring.o 00:03:19.998 CC module/event/subsystems/vmd/vmd.o 00:03:19.998 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:19.998 CC module/event/subsystems/scheduler/scheduler.o 00:03:19.998 LIB libspdk_event_vhost_blk.a 00:03:19.998 LIB libspdk_event_keyring.a 00:03:19.998 LIB libspdk_event_vmd.a 00:03:19.998 LIB libspdk_event_scheduler.a 00:03:19.998 LIB libspdk_event_sock.a 00:03:19.998 LIB libspdk_event_iobuf.a 00:03:19.998 SO libspdk_event_vhost_blk.so.3.0 00:03:19.998 SO libspdk_event_keyring.so.1.0 00:03:19.998 SO libspdk_event_scheduler.so.4.0 00:03:19.998 SO libspdk_event_sock.so.5.0 00:03:19.998 SO libspdk_event_vmd.so.6.0 00:03:19.998 SO libspdk_event_iobuf.so.3.0 00:03:19.998 SYMLINK libspdk_event_scheduler.so 00:03:19.998 SYMLINK libspdk_event_vhost_blk.so 00:03:19.998 SYMLINK libspdk_event_sock.so 00:03:19.998 SYMLINK libspdk_event_keyring.so 00:03:20.257 SYMLINK libspdk_event_vmd.so 00:03:20.257 SYMLINK libspdk_event_iobuf.so 00:03:20.515 CC module/event/subsystems/accel/accel.o 00:03:20.515 LIB libspdk_event_accel.a 00:03:20.774 SO libspdk_event_accel.so.6.0 00:03:20.774 SYMLINK libspdk_event_accel.so 00:03:21.033 CC module/event/subsystems/bdev/bdev.o 00:03:21.291 LIB libspdk_event_bdev.a 00:03:21.291 SO libspdk_event_bdev.so.6.0 00:03:21.291 SYMLINK libspdk_event_bdev.so 00:03:21.548 CC module/event/subsystems/nbd/nbd.o 00:03:21.548 CC module/event/subsystems/scsi/scsi.o 00:03:21.548 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:21.548 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:21.548 CC module/event/subsystems/ublk/ublk.o 00:03:21.806 LIB libspdk_event_nbd.a 00:03:21.806 LIB libspdk_event_ublk.a 00:03:21.806 LIB libspdk_event_scsi.a 00:03:21.806 SO libspdk_event_nbd.so.6.0 00:03:21.806 SO libspdk_event_ublk.so.3.0 00:03:21.806 SO libspdk_event_scsi.so.6.0 00:03:21.806 LIB libspdk_event_nvmf.a 00:03:21.806 SYMLINK libspdk_event_nbd.so 00:03:21.806 SYMLINK libspdk_event_ublk.so 00:03:21.806 SYMLINK libspdk_event_scsi.so 00:03:21.806 SO libspdk_event_nvmf.so.6.0 00:03:22.064 SYMLINK libspdk_event_nvmf.so 00:03:22.064 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:22.064 CC module/event/subsystems/iscsi/iscsi.o 00:03:22.322 LIB libspdk_event_vhost_scsi.a 00:03:22.322 LIB libspdk_event_iscsi.a 00:03:22.322 SO libspdk_event_vhost_scsi.so.3.0 00:03:22.322 SO libspdk_event_iscsi.so.6.0 00:03:22.322 SYMLINK libspdk_event_vhost_scsi.so 00:03:22.322 SYMLINK libspdk_event_iscsi.so 00:03:22.581 SO libspdk.so.6.0 00:03:22.581 SYMLINK libspdk.so 00:03:22.839 CC app/trace_record/trace_record.o 00:03:22.839 CC app/spdk_lspci/spdk_lspci.o 00:03:22.839 CXX app/trace/trace.o 00:03:22.839 CC app/spdk_nvme_perf/perf.o 00:03:22.839 CC app/spdk_nvme_identify/identify.o 00:03:22.839 CC app/nvmf_tgt/nvmf_main.o 00:03:22.839 CC app/iscsi_tgt/iscsi_tgt.o 00:03:22.839 CC app/spdk_tgt/spdk_tgt.o 00:03:22.839 CC 
examples/util/zipf/zipf.o 00:03:22.839 CC test/thread/poller_perf/poller_perf.o 00:03:23.098 LINK spdk_lspci 00:03:23.098 LINK nvmf_tgt 00:03:23.098 LINK iscsi_tgt 00:03:23.098 LINK zipf 00:03:23.098 LINK spdk_trace_record 00:03:23.098 LINK spdk_tgt 00:03:23.098 LINK poller_perf 00:03:23.356 LINK spdk_trace 00:03:23.356 CC examples/ioat/perf/perf.o 00:03:23.356 CC app/spdk_nvme_discover/discovery_aer.o 00:03:23.356 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:23.356 CC app/spdk_top/spdk_top.o 00:03:23.614 CC test/dma/test_dma/test_dma.o 00:03:23.614 CC examples/sock/hello_world/hello_sock.o 00:03:23.614 CC examples/thread/thread/thread_ex.o 00:03:23.614 LINK ioat_perf 00:03:23.614 LINK spdk_nvme_discover 00:03:23.614 CC examples/vmd/lsvmd/lsvmd.o 00:03:23.614 LINK interrupt_tgt 00:03:23.614 LINK spdk_nvme_identify 00:03:23.614 LINK spdk_nvme_perf 00:03:23.872 LINK hello_sock 00:03:23.872 LINK lsvmd 00:03:23.872 LINK thread 00:03:23.872 CC examples/ioat/verify/verify.o 00:03:23.872 LINK test_dma 00:03:23.872 CC examples/idxd/perf/perf.o 00:03:23.872 CC app/spdk_dd/spdk_dd.o 00:03:24.130 CC examples/vmd/led/led.o 00:03:24.130 CC test/app/bdev_svc/bdev_svc.o 00:03:24.130 CC app/fio/nvme/fio_plugin.o 00:03:24.130 CC app/vhost/vhost.o 00:03:24.130 LINK verify 00:03:24.130 LINK led 00:03:24.130 CC examples/nvme/hello_world/hello_world.o 00:03:24.388 LINK idxd_perf 00:03:24.388 LINK vhost 00:03:24.388 LINK bdev_svc 00:03:24.388 CC examples/accel/perf/accel_perf.o 00:03:24.388 LINK spdk_top 00:03:24.388 LINK spdk_dd 00:03:24.388 CC examples/blob/hello_world/hello_blob.o 00:03:24.388 CC examples/blob/cli/blobcli.o 00:03:24.388 LINK hello_world 00:03:24.646 CC app/fio/bdev/fio_plugin.o 00:03:24.646 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:24.646 CC examples/nvme/reconnect/reconnect.o 00:03:24.646 LINK spdk_nvme 00:03:24.646 TEST_HEADER include/spdk/accel.h 00:03:24.646 TEST_HEADER include/spdk/accel_module.h 00:03:24.646 TEST_HEADER include/spdk/assert.h 00:03:24.646 LINK hello_blob 00:03:24.646 TEST_HEADER include/spdk/barrier.h 00:03:24.646 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:24.646 TEST_HEADER include/spdk/base64.h 00:03:24.646 TEST_HEADER include/spdk/bdev.h 00:03:24.646 TEST_HEADER include/spdk/bdev_module.h 00:03:24.646 TEST_HEADER include/spdk/bdev_zone.h 00:03:24.646 TEST_HEADER include/spdk/bit_array.h 00:03:24.646 TEST_HEADER include/spdk/bit_pool.h 00:03:24.646 TEST_HEADER include/spdk/blob_bdev.h 00:03:24.646 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:24.646 CC examples/nvme/arbitration/arbitration.o 00:03:24.646 TEST_HEADER include/spdk/blobfs.h 00:03:24.646 TEST_HEADER include/spdk/blob.h 00:03:24.646 TEST_HEADER include/spdk/conf.h 00:03:24.646 TEST_HEADER include/spdk/config.h 00:03:24.646 TEST_HEADER include/spdk/cpuset.h 00:03:24.646 TEST_HEADER include/spdk/crc16.h 00:03:24.646 TEST_HEADER include/spdk/crc32.h 00:03:24.646 TEST_HEADER include/spdk/crc64.h 00:03:24.646 TEST_HEADER include/spdk/dif.h 00:03:24.646 TEST_HEADER include/spdk/dma.h 00:03:24.646 TEST_HEADER include/spdk/endian.h 00:03:24.646 TEST_HEADER include/spdk/env_dpdk.h 00:03:24.905 TEST_HEADER include/spdk/env.h 00:03:24.905 TEST_HEADER include/spdk/event.h 00:03:24.905 TEST_HEADER include/spdk/fd_group.h 00:03:24.905 TEST_HEADER include/spdk/fd.h 00:03:24.905 TEST_HEADER include/spdk/file.h 00:03:24.905 TEST_HEADER include/spdk/ftl.h 00:03:24.905 TEST_HEADER include/spdk/gpt_spec.h 00:03:24.905 TEST_HEADER include/spdk/hexlify.h 00:03:24.905 TEST_HEADER include/spdk/histogram_data.h 
00:03:24.905 TEST_HEADER include/spdk/idxd.h 00:03:24.905 TEST_HEADER include/spdk/idxd_spec.h 00:03:24.905 LINK accel_perf 00:03:24.905 TEST_HEADER include/spdk/init.h 00:03:24.905 TEST_HEADER include/spdk/ioat.h 00:03:24.905 TEST_HEADER include/spdk/ioat_spec.h 00:03:24.905 TEST_HEADER include/spdk/iscsi_spec.h 00:03:24.905 TEST_HEADER include/spdk/json.h 00:03:24.905 TEST_HEADER include/spdk/jsonrpc.h 00:03:24.905 TEST_HEADER include/spdk/keyring.h 00:03:24.905 TEST_HEADER include/spdk/keyring_module.h 00:03:24.905 CC examples/nvme/hotplug/hotplug.o 00:03:24.905 TEST_HEADER include/spdk/likely.h 00:03:24.905 TEST_HEADER include/spdk/log.h 00:03:24.905 TEST_HEADER include/spdk/lvol.h 00:03:24.905 TEST_HEADER include/spdk/memory.h 00:03:24.905 TEST_HEADER include/spdk/mmio.h 00:03:24.905 TEST_HEADER include/spdk/nbd.h 00:03:24.905 TEST_HEADER include/spdk/net.h 00:03:24.905 TEST_HEADER include/spdk/notify.h 00:03:24.905 TEST_HEADER include/spdk/nvme.h 00:03:24.905 TEST_HEADER include/spdk/nvme_intel.h 00:03:24.905 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:24.905 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:24.905 TEST_HEADER include/spdk/nvme_spec.h 00:03:24.905 TEST_HEADER include/spdk/nvme_zns.h 00:03:24.905 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:24.905 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:24.905 TEST_HEADER include/spdk/nvmf.h 00:03:24.905 TEST_HEADER include/spdk/nvmf_spec.h 00:03:24.905 TEST_HEADER include/spdk/nvmf_transport.h 00:03:24.905 TEST_HEADER include/spdk/opal.h 00:03:24.905 TEST_HEADER include/spdk/opal_spec.h 00:03:24.905 TEST_HEADER include/spdk/pci_ids.h 00:03:24.905 TEST_HEADER include/spdk/pipe.h 00:03:24.905 TEST_HEADER include/spdk/queue.h 00:03:24.905 TEST_HEADER include/spdk/reduce.h 00:03:24.905 TEST_HEADER include/spdk/rpc.h 00:03:24.905 TEST_HEADER include/spdk/scheduler.h 00:03:24.905 TEST_HEADER include/spdk/scsi.h 00:03:24.905 TEST_HEADER include/spdk/scsi_spec.h 00:03:24.905 TEST_HEADER include/spdk/sock.h 00:03:24.905 TEST_HEADER include/spdk/stdinc.h 00:03:24.905 TEST_HEADER include/spdk/string.h 00:03:24.905 TEST_HEADER include/spdk/thread.h 00:03:24.905 TEST_HEADER include/spdk/trace.h 00:03:24.905 TEST_HEADER include/spdk/trace_parser.h 00:03:24.905 TEST_HEADER include/spdk/tree.h 00:03:24.905 TEST_HEADER include/spdk/ublk.h 00:03:24.905 TEST_HEADER include/spdk/util.h 00:03:24.905 TEST_HEADER include/spdk/uuid.h 00:03:24.905 TEST_HEADER include/spdk/version.h 00:03:24.905 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:24.905 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:24.905 TEST_HEADER include/spdk/vhost.h 00:03:24.905 TEST_HEADER include/spdk/vmd.h 00:03:24.905 TEST_HEADER include/spdk/xor.h 00:03:24.905 TEST_HEADER include/spdk/zipf.h 00:03:24.905 CXX test/cpp_headers/accel.o 00:03:24.905 LINK blobcli 00:03:24.905 CXX test/cpp_headers/accel_module.o 00:03:24.905 LINK reconnect 00:03:24.905 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:25.164 LINK spdk_bdev 00:03:25.164 LINK hotplug 00:03:25.164 LINK arbitration 00:03:25.164 LINK nvme_manage 00:03:25.164 LINK nvme_fuzz 00:03:25.164 CXX test/cpp_headers/assert.o 00:03:25.164 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:25.164 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:25.164 CC test/app/histogram_perf/histogram_perf.o 00:03:25.164 CXX test/cpp_headers/barrier.o 00:03:25.164 CXX test/cpp_headers/base64.o 00:03:25.422 CC examples/bdev/hello_world/hello_bdev.o 00:03:25.422 CC examples/nvme/abort/abort.o 00:03:25.422 CC examples/bdev/bdevperf/bdevperf.o 00:03:25.422 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:25.422 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:25.422 LINK histogram_perf 00:03:25.422 CXX test/cpp_headers/bdev.o 00:03:25.422 LINK cmb_copy 00:03:25.422 CXX test/cpp_headers/bdev_module.o 00:03:25.680 LINK hello_bdev 00:03:25.680 LINK pmr_persistence 00:03:25.680 CXX test/cpp_headers/bdev_zone.o 00:03:25.680 CXX test/cpp_headers/bit_array.o 00:03:25.680 CXX test/cpp_headers/bit_pool.o 00:03:25.680 CC test/app/jsoncat/jsoncat.o 00:03:25.680 LINK abort 00:03:25.680 CXX test/cpp_headers/blob_bdev.o 00:03:25.680 CXX test/cpp_headers/blobfs_bdev.o 00:03:25.680 CXX test/cpp_headers/blobfs.o 00:03:25.680 LINK vhost_fuzz 00:03:25.939 LINK jsoncat 00:03:25.939 CXX test/cpp_headers/blob.o 00:03:25.939 CXX test/cpp_headers/conf.o 00:03:25.939 CC test/event/event_perf/event_perf.o 00:03:25.939 CC test/env/vtophys/vtophys.o 00:03:26.201 CC test/app/stub/stub.o 00:03:26.201 CC test/event/reactor/reactor.o 00:03:26.201 CC test/event/reactor_perf/reactor_perf.o 00:03:26.201 CC test/env/mem_callbacks/mem_callbacks.o 00:03:26.201 LINK bdevperf 00:03:26.201 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:26.201 CXX test/cpp_headers/config.o 00:03:26.201 CXX test/cpp_headers/cpuset.o 00:03:26.201 LINK event_perf 00:03:26.201 LINK vtophys 00:03:26.201 LINK reactor 00:03:26.201 LINK reactor_perf 00:03:26.201 LINK stub 00:03:26.201 LINK env_dpdk_post_init 00:03:26.461 CXX test/cpp_headers/crc16.o 00:03:26.461 CC test/event/app_repeat/app_repeat.o 00:03:26.461 CXX test/cpp_headers/crc32.o 00:03:26.461 CC test/event/scheduler/scheduler.o 00:03:26.461 CC test/rpc_client/rpc_client_test.o 00:03:26.461 CC test/env/memory/memory_ut.o 00:03:26.461 CC examples/nvmf/nvmf/nvmf.o 00:03:26.461 CC test/nvme/aer/aer.o 00:03:26.775 LINK iscsi_fuzz 00:03:26.775 LINK app_repeat 00:03:26.775 CXX test/cpp_headers/crc64.o 00:03:26.775 LINK rpc_client_test 00:03:26.775 LINK scheduler 00:03:26.775 LINK mem_callbacks 00:03:26.775 CC test/accel/dif/dif.o 00:03:26.775 CXX test/cpp_headers/dif.o 00:03:26.775 CXX test/cpp_headers/dma.o 00:03:26.775 CXX test/cpp_headers/endian.o 00:03:26.775 LINK aer 00:03:27.033 LINK nvmf 00:03:27.033 CC test/env/pci/pci_ut.o 00:03:27.033 CXX test/cpp_headers/env_dpdk.o 00:03:27.033 CXX test/cpp_headers/env.o 00:03:27.033 CXX test/cpp_headers/event.o 00:03:27.033 CXX test/cpp_headers/fd_group.o 00:03:27.033 CC test/nvme/reset/reset.o 00:03:27.033 CXX test/cpp_headers/fd.o 00:03:27.033 CXX test/cpp_headers/file.o 00:03:27.033 CC test/nvme/sgl/sgl.o 00:03:27.292 CC test/nvme/e2edp/nvme_dp.o 00:03:27.292 LINK dif 00:03:27.292 CXX test/cpp_headers/ftl.o 00:03:27.292 CC test/nvme/overhead/overhead.o 00:03:27.292 CXX test/cpp_headers/gpt_spec.o 00:03:27.292 CC test/nvme/err_injection/err_injection.o 00:03:27.292 LINK pci_ut 00:03:27.292 LINK reset 00:03:27.549 LINK sgl 00:03:27.549 LINK nvme_dp 00:03:27.549 CXX test/cpp_headers/hexlify.o 00:03:27.549 LINK err_injection 00:03:27.549 CXX test/cpp_headers/histogram_data.o 00:03:27.549 LINK overhead 00:03:27.549 CXX test/cpp_headers/idxd.o 00:03:27.549 CC test/blobfs/mkfs/mkfs.o 00:03:27.549 CC test/nvme/startup/startup.o 00:03:27.549 CXX test/cpp_headers/idxd_spec.o 00:03:27.807 LINK memory_ut 00:03:27.807 CC test/nvme/reserve/reserve.o 00:03:27.807 CC test/lvol/esnap/esnap.o 00:03:27.807 CXX test/cpp_headers/init.o 00:03:27.807 CC test/bdev/bdevio/bdevio.o 00:03:27.807 LINK startup 00:03:27.807 LINK mkfs 00:03:27.807 CC test/nvme/connect_stress/connect_stress.o 00:03:27.807 CC 
test/nvme/simple_copy/simple_copy.o 00:03:28.065 CC test/nvme/boot_partition/boot_partition.o 00:03:28.065 CXX test/cpp_headers/ioat.o 00:03:28.066 CC test/nvme/compliance/nvme_compliance.o 00:03:28.066 LINK reserve 00:03:28.066 LINK connect_stress 00:03:28.066 CC test/nvme/fused_ordering/fused_ordering.o 00:03:28.066 LINK simple_copy 00:03:28.066 LINK boot_partition 00:03:28.066 CXX test/cpp_headers/ioat_spec.o 00:03:28.066 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:28.066 CXX test/cpp_headers/iscsi_spec.o 00:03:28.324 LINK bdevio 00:03:28.324 CXX test/cpp_headers/json.o 00:03:28.324 CC test/nvme/fdp/fdp.o 00:03:28.324 LINK fused_ordering 00:03:28.324 LINK nvme_compliance 00:03:28.324 CXX test/cpp_headers/jsonrpc.o 00:03:28.324 CXX test/cpp_headers/keyring.o 00:03:28.324 LINK doorbell_aers 00:03:28.324 CC test/nvme/cuse/cuse.o 00:03:28.324 CXX test/cpp_headers/keyring_module.o 00:03:28.324 CXX test/cpp_headers/likely.o 00:03:28.582 CXX test/cpp_headers/log.o 00:03:28.582 CXX test/cpp_headers/lvol.o 00:03:28.582 CXX test/cpp_headers/memory.o 00:03:28.582 CXX test/cpp_headers/mmio.o 00:03:28.582 CXX test/cpp_headers/nbd.o 00:03:28.582 CXX test/cpp_headers/net.o 00:03:28.582 CXX test/cpp_headers/notify.o 00:03:28.582 CXX test/cpp_headers/nvme.o 00:03:28.582 LINK fdp 00:03:28.582 CXX test/cpp_headers/nvme_intel.o 00:03:28.582 CXX test/cpp_headers/nvme_ocssd.o 00:03:28.582 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:28.582 CXX test/cpp_headers/nvme_spec.o 00:03:28.841 CXX test/cpp_headers/nvme_zns.o 00:03:28.841 CXX test/cpp_headers/nvmf_cmd.o 00:03:28.841 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:28.841 CXX test/cpp_headers/nvmf.o 00:03:28.841 CXX test/cpp_headers/nvmf_spec.o 00:03:28.841 CXX test/cpp_headers/nvmf_transport.o 00:03:28.841 CXX test/cpp_headers/opal.o 00:03:28.841 CXX test/cpp_headers/opal_spec.o 00:03:28.841 CXX test/cpp_headers/pci_ids.o 00:03:28.841 CXX test/cpp_headers/pipe.o 00:03:29.100 CXX test/cpp_headers/queue.o 00:03:29.100 CXX test/cpp_headers/reduce.o 00:03:29.100 CXX test/cpp_headers/rpc.o 00:03:29.100 CXX test/cpp_headers/scheduler.o 00:03:29.100 CXX test/cpp_headers/scsi.o 00:03:29.100 CXX test/cpp_headers/scsi_spec.o 00:03:29.100 CXX test/cpp_headers/sock.o 00:03:29.100 CXX test/cpp_headers/stdinc.o 00:03:29.100 CXX test/cpp_headers/string.o 00:03:29.100 CXX test/cpp_headers/thread.o 00:03:29.100 CXX test/cpp_headers/trace.o 00:03:29.100 CXX test/cpp_headers/trace_parser.o 00:03:29.100 CXX test/cpp_headers/tree.o 00:03:29.359 CXX test/cpp_headers/ublk.o 00:03:29.359 CXX test/cpp_headers/util.o 00:03:29.359 CXX test/cpp_headers/uuid.o 00:03:29.359 CXX test/cpp_headers/version.o 00:03:29.359 CXX test/cpp_headers/vfio_user_pci.o 00:03:29.359 CXX test/cpp_headers/vfio_user_spec.o 00:03:29.359 CXX test/cpp_headers/vhost.o 00:03:29.359 CXX test/cpp_headers/vmd.o 00:03:29.359 CXX test/cpp_headers/xor.o 00:03:29.359 CXX test/cpp_headers/zipf.o 00:03:29.927 LINK cuse 00:03:33.214 LINK esnap 00:03:33.214 00:03:33.214 real 1m4.609s 00:03:33.214 user 6m17.005s 00:03:33.214 sys 1m39.939s 00:03:33.214 18:54:00 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:33.214 18:54:00 make -- common/autotest_common.sh@10 -- $ set +x 00:03:33.214 ************************************ 00:03:33.214 END TEST make 00:03:33.214 ************************************ 00:03:33.214 18:54:00 -- common/autotest_common.sh@1142 -- $ return 0 00:03:33.214 18:54:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:33.214 18:54:00 -- pm/common@29 -- $ signal_monitor_resources 
TERM 00:03:33.214 18:54:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:33.214 18:54:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.214 18:54:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:33.214 18:54:00 -- pm/common@44 -- $ pid=5141 00:03:33.214 18:54:00 -- pm/common@50 -- $ kill -TERM 5141 00:03:33.214 18:54:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.214 18:54:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:33.214 18:54:00 -- pm/common@44 -- $ pid=5143 00:03:33.214 18:54:00 -- pm/common@50 -- $ kill -TERM 5143 00:03:33.214 18:54:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:33.214 18:54:00 -- nvmf/common.sh@7 -- # uname -s 00:03:33.214 18:54:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:33.214 18:54:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:33.214 18:54:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:33.214 18:54:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:33.214 18:54:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:33.214 18:54:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:33.214 18:54:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:33.214 18:54:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:33.214 18:54:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:33.214 18:54:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:33.214 18:54:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:03:33.214 18:54:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:03:33.214 18:54:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:33.214 18:54:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:33.214 18:54:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:33.214 18:54:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:33.214 18:54:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:33.214 18:54:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:33.214 18:54:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.214 18:54:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.214 18:54:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.214 18:54:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.214 18:54:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.214 18:54:00 -- paths/export.sh@5 -- # export PATH 00:03:33.214 18:54:00 -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.214 18:54:00 -- nvmf/common.sh@47 -- # : 0 00:03:33.214 18:54:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:33.214 18:54:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:33.214 18:54:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:33.214 18:54:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:33.214 18:54:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:33.214 18:54:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:33.214 18:54:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:33.214 18:54:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:33.214 18:54:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:33.214 18:54:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:33.214 18:54:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:33.214 18:54:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:33.214 18:54:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.214 18:54:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:33.214 18:54:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.214 18:54:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:33.214 18:54:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:33.214 18:54:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:33.214 18:54:00 -- spdk/autotest.sh@48 -- # udevadm_pid=52788 00:03:33.214 18:54:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:33.214 18:54:00 -- pm/common@17 -- # local monitor 00:03:33.214 18:54:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:33.214 18:54:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.214 18:54:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.214 18:54:00 -- pm/common@21 -- # date +%s 00:03:33.214 18:54:00 -- pm/common@21 -- # date +%s 00:03:33.214 18:54:00 -- pm/common@25 -- # sleep 1 00:03:33.214 18:54:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721069640 00:03:33.214 18:54:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721069640 00:03:33.471 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721069640_collect-cpu-load.pm.log 00:03:33.471 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721069640_collect-vmstat.pm.log 00:03:34.402 18:54:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:34.402 18:54:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:34.402 18:54:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:34.402 18:54:01 -- common/autotest_common.sh@10 -- # set +x 00:03:34.402 18:54:01 -- spdk/autotest.sh@59 -- # create_test_list 00:03:34.402 18:54:01 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:34.402 18:54:01 -- common/autotest_common.sh@10 -- # set +x 00:03:34.402 18:54:01 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:34.402 18:54:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:34.402 18:54:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:34.402 18:54:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:34.402 18:54:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:34.402 18:54:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:34.402 18:54:01 -- common/autotest_common.sh@1455 -- # uname 00:03:34.402 18:54:01 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:34.402 18:54:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:34.402 18:54:01 -- common/autotest_common.sh@1475 -- # uname 00:03:34.402 18:54:01 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:34.402 18:54:01 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:34.402 18:54:01 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:34.402 18:54:01 -- spdk/autotest.sh@72 -- # hash lcov 00:03:34.402 18:54:01 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:34.402 18:54:01 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:34.402 --rc lcov_branch_coverage=1 00:03:34.402 --rc lcov_function_coverage=1 00:03:34.402 --rc genhtml_branch_coverage=1 00:03:34.402 --rc genhtml_function_coverage=1 00:03:34.402 --rc genhtml_legend=1 00:03:34.402 --rc geninfo_all_blocks=1 00:03:34.402 ' 00:03:34.402 18:54:01 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:34.402 --rc lcov_branch_coverage=1 00:03:34.402 --rc lcov_function_coverage=1 00:03:34.402 --rc genhtml_branch_coverage=1 00:03:34.402 --rc genhtml_function_coverage=1 00:03:34.402 --rc genhtml_legend=1 00:03:34.402 --rc geninfo_all_blocks=1 00:03:34.402 ' 00:03:34.402 18:54:01 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:34.402 --rc lcov_branch_coverage=1 00:03:34.402 --rc lcov_function_coverage=1 00:03:34.402 --rc genhtml_branch_coverage=1 00:03:34.402 --rc genhtml_function_coverage=1 00:03:34.402 --rc genhtml_legend=1 00:03:34.402 --rc geninfo_all_blocks=1 00:03:34.402 --no-external' 00:03:34.402 18:54:01 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:34.402 --rc lcov_branch_coverage=1 00:03:34.402 --rc lcov_function_coverage=1 00:03:34.402 --rc genhtml_branch_coverage=1 00:03:34.402 --rc genhtml_function_coverage=1 00:03:34.402 --rc genhtml_legend=1 00:03:34.402 --rc geninfo_all_blocks=1 00:03:34.402 --no-external' 00:03:34.402 18:54:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:34.402 lcov: LCOV version 1.14 00:03:34.403 18:54:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:49.274 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:49.274 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
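As context for the LCOV_OPTS export and the baseline lcov capture traced just above: a minimal sketch of the usual lcov flow around a test run. The baseline step and the --rc options mirror the log; the post-test capture, merge, and genhtml steps (and the cov_test.info / cov_total.info names) are assumptions for illustration, not commands taken from this run.

    # sketch of the standard lcov flow; only the baseline step appears in the log above
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external"
    # 1. all-zero baseline before the tests run (-c capture, -i initial)
    lcov $LCOV_OPTS -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"
    # 2. after the tests, capture the real execution counts (assumed step)
    lcov $LCOV_OPTS -q -c -t Tests -d "$src" -o "$out/cov_test.info"
    # 3. merge baseline and test data so never-executed files still show up at 0% (assumed step)
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # 4. render an HTML report (assumed step)
    genhtml "$out/cov_total.info" -o "$out/coverage"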
00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:01.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:01.509 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:01.510 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:01.510 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:01.510 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:01.510 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:01.511 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:01.511 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:01.511 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:01.511 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:01.511 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:01.511 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:01.511 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:01.511 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:01.769 
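The long run of "no functions found" warnings here comes from the test/cpp_headers objects: each one only compiles a public header, so its .gcno file records no executable functions and geninfo has nothing to report for it. If those entries are unwanted in a final report, one possible cleanup (an assumption, not something this log does) is to prune them from the merged tracefile:

    # sketch: drop header-compile-only objects from a merged tracefile (assumed file names)
    lcov --remove cov_total.info '*/test/cpp_headers/*' -o cov_filtered.info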
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:01.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:01.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:01.770 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:01.770 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:01.770 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:01.770 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:01.770 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:05.967 18:54:32 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:05.967 18:54:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:05.967 18:54:32 -- common/autotest_common.sh@10 -- # set +x 00:04:05.967 18:54:32 -- spdk/autotest.sh@91 -- # rm -f 00:04:05.967 18:54:32 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.967 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:05.967 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:05.967 18:54:33 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:05.967 18:54:33 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:05.968 18:54:33 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:05.968 18:54:33 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:05.968 18:54:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.968 18:54:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:05.968 18:54:33 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:05.968 18:54:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.968 18:54:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.968 18:54:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.968 18:54:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:05.968 18:54:33 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:05.968 18:54:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:05.968 18:54:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.968 18:54:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.968 18:54:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:05.968 18:54:33 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:05.968 18:54:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:05.968 18:54:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.968 18:54:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.968 18:54:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 
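The is_block_zoned checks traced above decide, per NVMe namespace, whether zoned-block handling is needed by reading the queue/zoned attribute in sysfs ("none" means a regular namespace). A standalone sketch of the same idea; the loop and variable names are illustrative rather than copied from the helper:

    # sketch: collect zoned NVMe namespaces via sysfs, mirroring the checks traced above
    declare -A zoned_devs=()
    for sysdir in /sys/block/nvme*; do
        dev=$(basename "$sysdir")
        # "none" = conventional namespace; "host-aware"/"host-managed" = zoned
        if [[ -e $sysdir/queue/zoned && $(cat "$sysdir/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done
    echo "zoned namespaces: ${!zoned_devs[*]}"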
00:04:05.968 18:54:33 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:05.968 18:54:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:05.968 18:54:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.968 18:54:33 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:05.968 18:54:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.968 18:54:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:05.968 18:54:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:05.968 18:54:33 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:05.968 18:54:33 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:05.968 No valid GPT data, bailing 00:04:05.968 18:54:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:05.968 18:54:33 -- scripts/common.sh@391 -- # pt= 00:04:05.968 18:54:33 -- scripts/common.sh@392 -- # return 1 00:04:05.968 18:54:33 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:05.968 1+0 records in 00:04:05.968 1+0 records out 00:04:05.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00402486 s, 261 MB/s 00:04:05.968 18:54:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.968 18:54:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:05.968 18:54:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:05.968 18:54:33 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:05.968 18:54:33 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:05.968 No valid GPT data, bailing 00:04:05.968 18:54:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:06.227 18:54:33 -- scripts/common.sh@391 -- # pt= 00:04:06.227 18:54:33 -- scripts/common.sh@392 -- # return 1 00:04:06.227 18:54:33 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:06.227 1+0 records in 00:04:06.227 1+0 records out 00:04:06.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00315901 s, 332 MB/s 00:04:06.227 18:54:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:06.227 18:54:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:06.227 18:54:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:06.227 18:54:33 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:06.227 18:54:33 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:06.227 No valid GPT data, bailing 00:04:06.227 18:54:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:06.227 18:54:33 -- scripts/common.sh@391 -- # pt= 00:04:06.227 18:54:33 -- scripts/common.sh@392 -- # return 1 00:04:06.227 18:54:33 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:06.227 1+0 records in 00:04:06.227 1+0 records out 00:04:06.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460654 s, 228 MB/s 00:04:06.227 18:54:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:06.227 18:54:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:06.227 18:54:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:06.227 18:54:33 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:06.227 18:54:33 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:06.227 No valid GPT data, bailing 00:04:06.227 18:54:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:06.227 
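Here block_in_use probes each namespace for an existing partition table (spdk-gpt.py, then blkid -s PTTYPE); "No valid GPT data, bailing" followed by the return 1 means the device is free, so autotest zeroes its first megabyte to clear stale signatures before the tests claim it. A rough sketch of that check-then-wipe pattern, with the device path as a placeholder:

    # sketch: wipe a namespace only if no partition table is detected (placeholder device path)
    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev")
    if [[ -z $pt ]]; then
        # nothing claims the device: clear the first 1 MiB of old GPT/filesystem signatures
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi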
18:54:33 -- scripts/common.sh@391 -- # pt= 00:04:06.227 18:54:33 -- scripts/common.sh@392 -- # return 1 00:04:06.227 18:54:33 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:06.227 1+0 records in 00:04:06.227 1+0 records out 00:04:06.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00553105 s, 190 MB/s 00:04:06.227 18:54:33 -- spdk/autotest.sh@118 -- # sync 00:04:06.227 18:54:33 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:06.227 18:54:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:06.227 18:54:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:08.127 18:54:35 -- spdk/autotest.sh@124 -- # uname -s 00:04:08.127 18:54:35 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:08.127 18:54:35 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:08.127 18:54:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.127 18:54:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.127 18:54:35 -- common/autotest_common.sh@10 -- # set +x 00:04:08.127 ************************************ 00:04:08.127 START TEST setup.sh 00:04:08.127 ************************************ 00:04:08.127 18:54:35 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:08.385 * Looking for test storage... 00:04:08.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:08.385 18:54:35 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:08.385 18:54:35 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:08.385 18:54:35 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:08.385 18:54:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.385 18:54:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.385 18:54:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.385 ************************************ 00:04:08.385 START TEST acl 00:04:08.385 ************************************ 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:08.385 * Looking for test storage... 
00:04:08.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:08.385 18:54:35 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:08.385 18:54:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.385 18:54:35 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:08.385 18:54:35 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:08.385 18:54:35 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:08.385 18:54:35 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:08.385 18:54:35 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:08.385 18:54:35 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.385 18:54:35 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.316 18:54:36 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:09.316 18:54:36 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:09.316 18:54:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.316 18:54:36 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:09.316 18:54:36 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.316 18:54:36 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:09.882 18:54:36 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:09.882 18:54:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:09.882 18:54:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.882 Hugepages 00:04:09.882 node hugesize free / total 00:04:09.882 18:54:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:09.882 18:54:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:09.882 18:54:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.882 00:04:09.882 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:09.882 18:54:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:09.882 18:54:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:09.882 18:54:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.882 18:54:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:09.882 18:54:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:09.882 18:54:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.882 18:54:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.882 18:54:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:09.882 18:54:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:09.882 18:54:37 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:09.882 18:54:37 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:09.882 18:54:37 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:09.882 18:54:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:10.141 18:54:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:10.141 18:54:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:10.141 18:54:37 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:10.141 18:54:37 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:10.141 18:54:37 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:10.141 18:54:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:10.141 18:54:37 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:10.141 18:54:37 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:10.141 18:54:37 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.141 18:54:37 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.141 18:54:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:10.141 ************************************ 00:04:10.141 START TEST denied 00:04:10.141 ************************************ 00:04:10.141 18:54:37 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:10.141 18:54:37 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:10.141 18:54:37 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:10.141 18:54:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.141 18:54:37 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:10.141 18:54:37 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:11.075 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:11.075 18:54:38 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:11.075 18:54:38 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:11.075 18:54:38 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:11.075 18:54:38 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:11.075 18:54:38 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:11.075 18:54:38 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:11.075 18:54:38 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:11.075 18:54:38 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:11.075 18:54:38 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.075 18:54:38 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.642 00:04:11.642 real 0m1.460s 00:04:11.642 user 0m0.587s 00:04:11.642 sys 0m0.814s 00:04:11.642 18:54:38 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.642 18:54:38 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:11.642 ************************************ 00:04:11.642 END TEST denied 00:04:11.642 ************************************ 00:04:11.642 18:54:38 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:11.642 18:54:38 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:11.642 18:54:38 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.642 18:54:38 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.642 18:54:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:11.642 ************************************ 00:04:11.642 START TEST allowed 00:04:11.642 ************************************ 00:04:11.642 18:54:38 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:11.642 18:54:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:11.642 18:54:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:11.642 18:54:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.642 18:54:38 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:11.642 18:54:38 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:12.579 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.579 18:54:39 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:12.579 18:54:39 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:12.579 18:54:39 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:12.579 18:54:39 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:12.579 18:54:39 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:12.579 18:54:39 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:12.579 18:54:39 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:12.579 18:54:39 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:12.579 18:54:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.579 18:54:39 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.146 00:04:13.146 real 0m1.551s 00:04:13.146 user 0m0.696s 00:04:13.146 sys 0m0.847s 00:04:13.146 18:54:40 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:13.146 18:54:40 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:13.146 ************************************ 00:04:13.146 END TEST allowed 00:04:13.146 ************************************ 00:04:13.146 18:54:40 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:13.146 00:04:13.146 real 0m4.853s 00:04:13.146 user 0m2.129s 00:04:13.146 sys 0m2.659s 00:04:13.146 18:54:40 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.146 18:54:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:13.146 ************************************ 00:04:13.146 END TEST acl 00:04:13.146 ************************************ 00:04:13.146 18:54:40 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:13.146 18:54:40 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:13.146 18:54:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.146 18:54:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.146 18:54:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.146 ************************************ 00:04:13.146 START TEST hugepages 00:04:13.146 ************************************ 00:04:13.146 18:54:40 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:13.406 * Looking for test storage... 00:04:13.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6034716 kB' 'MemAvailable: 7416604 kB' 'Buffers: 2436 kB' 'Cached: 1596152 kB' 'SwapCached: 0 kB' 'Active: 435904 kB' 'Inactive: 1267240 kB' 'Active(anon): 115044 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 106176 kB' 'Mapped: 48720 kB' 'Shmem: 10488 kB' 'KReclaimable: 61452 kB' 'Slab: 133196 kB' 'SReclaimable: 61452 kB' 'SUnreclaim: 71744 kB' 'KernelStack: 6380 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 336316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.406 18:54:40 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.406 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.407 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.408 18:54:40 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:13.408 18:54:40 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:13.408 18:54:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.408 18:54:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.408 18:54:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.408 ************************************ 00:04:13.408 START TEST default_setup 00:04:13.408 ************************************ 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:13.408 18:54:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:13.409 18:54:40 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.409 18:54:40 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.974 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.235 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:14.235 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8133412 kB' 'MemAvailable: 9515180 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452952 kB' 'Inactive: 1267252 kB' 'Active(anon): 132092 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123204 kB' 'Mapped: 49692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133016 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71828 kB' 'KernelStack: 6400 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.236 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8133412 kB' 'MemAvailable: 9515180 kB' 'Buffers: 2436 kB' 'Cached: 1596140 kB' 'SwapCached: 0 kB' 'Active: 452576 kB' 'Inactive: 1267252 kB' 'Active(anon): 131716 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267252 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122896 kB' 'Mapped: 48780 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133012 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71824 kB' 'KernelStack: 6384 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.237 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.238 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8133160 kB' 'MemAvailable: 9514932 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452368 kB' 'Inactive: 1267256 kB' 'Active(anon): 131508 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122924 kB' 'Mapped: 
48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 132992 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71804 kB' 'KernelStack: 6368 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.239 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.240 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 
18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:14.241 nr_hugepages=1024 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:14.241 resv_hugepages=0 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:14.241 surplus_hugepages=0 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:14.241 anon_hugepages=0 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8133160 kB' 'MemAvailable: 9514932 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452648 kB' 'Inactive: 1267256 kB' 'Active(anon): 131788 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122928 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 132992 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71804 kB' 'KernelStack: 6368 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.241 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.242 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 
18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8133160 kB' 'MemUsed: 4108812 kB' 'SwapCached: 0 kB' 'Active: 452636 kB' 'Inactive: 1267256 kB' 'Active(anon): 131776 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1598580 kB' 'Mapped: 48592 kB' 'AnonPages: 122936 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61188 kB' 'Slab: 132992 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.243 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.244 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
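Editor's note: the long run of "[[ <field> == HugePages_Surp ]] ... continue" entries above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until it reaches the key it was asked for, then echoing that key's value (0 here). A simplified sketch of the same scan, assuming a plain while-read form rather than the mapfile-backed loop the trace actually shows:

    # Split each "Key:   value kB" line on ': ', skip non-matching keys,
    # print the value of the first match (HugePages_Surp -> 0 in this run).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
    }
    get_meminfo_sketch HugePages_Surp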
00:04:14.503 node0=1024 expecting 1024 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:14.503 00:04:14.503 real 0m1.005s 00:04:14.503 user 0m0.462s 00:04:14.503 sys 0m0.514s 00:04:14.503 ************************************ 00:04:14.503 END TEST default_setup 00:04:14.503 ************************************ 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.503 18:54:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:14.503 18:54:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:14.503 18:54:41 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:14.503 18:54:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.503 18:54:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.503 18:54:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.503 ************************************ 00:04:14.503 START TEST per_node_1G_alloc 00:04:14.503 ************************************ 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.503 18:54:41 
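Editor's note: default_setup ends above with its "node0=1024 expecting 1024" check passing, and per_node_1G_alloc immediately computes its own target: get_test_nr_hugepages is called with size=1048576 (kB) for node 0 and arrives at nr_hugepages=512. That value is consistent with simply dividing the requested size by the default hugepage size the log reports in the meminfo dumps below (Hugepagesize: 2048 kB); a hypothetical re-derivation, assuming that is all the helper does:

    size_kb=1048576          # first argument to get_test_nr_hugepages
    hugepage_kb=2048         # 'Hugepagesize: 2048 kB' from the meminfo dumps below
    echo $(( size_kb / hugepage_kb ))   # -> 512, matching nr_hugepages=512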
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.503 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:14.764 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.764 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:14.764 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc 
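Editor's note: here verify_nr_hugepages calls get_meminfo AnonHugePages with an empty node argument, so the per-node path /sys/devices/system/node/node/meminfo does not exist, mem_f stays /proc/meminfo, the file is slurped with mapfile, and any "Node N " prefix is stripped with an extglob pattern. A sketch of that selection logic, under the assumption that the trace reflects the helper's whole path handling (the function name below is made up):

    shopt -s extglob   # needed for the +([0-9]) pattern below
    pick_meminfo() {
        local node=$1 mem_f=/proc/meminfo
        local -a mem
        # Prefer the per-node counters when a node id is supplied and present.
        [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]] &&
            mem_f=/sys/devices/system/node/node${node}/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
        printf '%s\n' "${mem[@]}"
    }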
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9183812 kB' 'MemAvailable: 10565584 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452840 kB' 'Inactive: 1267256 kB' 'Active(anon): 131980 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123128 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133028 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71840 kB' 'KernelStack: 6356 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:42 
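Editor's note: the snapshot just printed already shows the outcome the test is about to verify: 'HugePages_Total: 512', 'HugePages_Free: 512' and 'Hugetlb: 1048576 kB', i.e. the 512 pages requested via NRHUGE=512 HUGENODE=0 are allocated and still free. The same fields can be pulled straight from procfs; a small example (the node0 sysfs path is an assumption about this VM's topology):

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugetlb|Hugepagesize' /proc/meminfo
    # per-node view, if /sys/devices/system/node/node0 exists on the machine:
    grep HugePages_ /sys/devices/system/node/node0/meminfo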
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.764 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9183812 kB' 'MemAvailable: 10565584 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452612 kB' 'Inactive: 1267256 kB' 'Active(anon): 131752 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122824 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133016 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71828 kB' 'KernelStack: 6336 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.765 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.765 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.766 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9183812 kB' 'MemAvailable: 10565584 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452852 kB' 'Inactive: 1267256 kB' 'Active(anon): 131992 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123100 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133016 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71828 kB' 'KernelStack: 6352 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.029 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.030 
18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.030 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:15.031 nr_hugepages=512 00:04:15.031 resv_hugepages=0 00:04:15.031 surplus_hugepages=0 00:04:15.031 anon_hugepages=0 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9183812 kB' 'MemAvailable: 10565584 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452612 kB' 'Inactive: 1267256 kB' 'Active(anon): 131752 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 122856 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133016 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71828 kB' 'KernelStack: 6352 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.031 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
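The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]]' / 'continue' entries above and below are setup/common.sh's get_meminfo helper scanning meminfo one field at a time under xtrace: it reads each line as 'key: value', skips every key that does not match the one requested (HugePages_Rsvd, HugePages_Total, HugePages_Surp), then echoes the matching value and returns. A rough sketch of that helper, reconstructed from this trace, follows; the function name, file paths, and per-node handling all appear in the log, but the exact body is an inference rather than the verbatim script.

  # Sketch of get_meminfo as reconstructed from the xtrace above (not the verbatim source).
  shopt -s extglob   # needed for the +([0-9]) pattern used to strip node prefixes

  get_meminfo() {
      local get=$1 node=$2      # e.g. get_meminfo HugePages_Surp 0
      local mem_f=/proc/meminfo
      local -a mem
      local var val _

      # Per-node queries read that node's own meminfo when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Per-node meminfo lines carry a leading "Node N " that /proc/meminfo lacks.
      mem=("${mem[@]#Node +([0-9]) }")

      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the skip traced over and over in this log
          echo "$val"
          return 0
      done
  }

  # Usage mirroring the checks made by setup/hugepages.sh in this test: it asserts that
  # the system-wide and node-0 counters both report the 512 pages it configured.
  total=$(get_meminfo HugePages_Total)    # whole system
  surp0=$(get_meminfo HugePages_Surp 0)   # NUMA node 0 only
  (( total == 512 && surp0 == 0 )) && echo 'node0=512 expecting 512'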
00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 
18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.032 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9183816 kB' 'MemUsed: 3058156 kB' 'SwapCached: 0 kB' 'Active: 452608 kB' 'Inactive: 1267256 kB' 'Active(anon): 131748 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1598580 kB' 'Mapped: 48632 kB' 'AnonPages: 122856 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61188 kB' 'Slab: 133016 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71828 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.033 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.034 node0=512 expecting 512 00:04:15.034 ************************************ 00:04:15.034 END TEST per_node_1G_alloc 00:04:15.034 ************************************ 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:15.034 00:04:15.034 real 0m0.597s 00:04:15.034 user 0m0.272s 00:04:15.034 sys 0m0.328s 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.034 18:54:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:15.034 18:54:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:15.034 18:54:42 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:15.034 18:54:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.034 18:54:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.034 18:54:42 
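The scan traced above is setup/common.sh's get_meminfo helper walking a /proc/meminfo snapshot until it reaches the requested key (HugePages_Surp here) and echoing that key's value; every non-matching key produces the IFS/read/[[/continue quartet that fills the log. A minimal sketch of that loop, assuming the parts the xtrace does not show (the real helper also snapshots the file with mapfile and strips "Node N" prefixes for per-node queries):

    # sketch of the get_meminfo loop seen in the trace, not the verbatim SPDK helper
    get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # per-node queries read that node's own meminfo when it exists
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other key, hence the long trace
        echo "$val"                        # e.g. HugePages_Surp -> 0
        return 0
      done < "$mem_f"
      return 1
    }
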
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.034 ************************************ 00:04:15.034 START TEST even_2G_alloc 00:04:15.034 ************************************ 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:15.034 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:15.035 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.035 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.292 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.555 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:15.555 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc 
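Just above, even_2G_alloc sizes its pool: get_test_nr_hugepages is handed 2097152 (kB, i.e. 2 GiB), divides by the 2048 kB default hugepage size to get nr_hugepages=1024, and the test then sets NRHUGE=1024 and HUGE_EVEN_ALLOC=yes before re-running scripts/setup.sh so the pages are spread evenly across NUMA nodes. A rough sketch of that arithmetic and handoff, assuming details beyond what the trace shows:

    # sketch of the sizing step traced above, not the verbatim SPDK hugepages.sh
    get_test_nr_hugepages() {
      local size=$1                                   # requested pool size in kB: 2097152 kB == 2 GiB
      local default_hugepages
      default_hugepages=$(get_meminfo Hugepagesize)   # 2048 kB on this VM
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024 pages
    }

    get_test_nr_hugepages 2097152
    # the trace sets NRHUGE and HUGE_EVEN_ALLOC before calling setup; shown here as one command for brevity
    NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh
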
-- setup/hugepages.sh@92 -- # local surp 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8137732 kB' 'MemAvailable: 9519504 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452852 kB' 'Inactive: 1267256 kB' 'Active(anon): 131992 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123100 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 132996 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71808 kB' 'KernelStack: 6324 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.555 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8137480 kB' 'MemAvailable: 9519252 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452732 kB' 'Inactive: 1267256 kB' 'Active(anon): 131872 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122772 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133000 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71812 kB' 'KernelStack: 6352 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.556 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 
18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.557 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 
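For reference, the /proc/meminfo snapshot echoed a little above already shows the pool this test asked for: HugePages_Total and HugePages_Free of 1024, Hugepagesize 2048 kB, Hugetlb 2097152 kB; the loop running here is only isolating HugePages_Surp out of that snapshot. Outside the harness the same fields can be pulled with a plain grep (a convenience, not part of the SPDK scripts):

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb' /proc/meminfo
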
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8137480 kB' 'MemAvailable: 9519252 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452628 kB' 'Inactive: 1267256 kB' 'Active(anon): 131768 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122888 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133000 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71812 kB' 'KernelStack: 6368 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.558 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.559 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.560 nr_hugepages=1024 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.560 resv_hugepages=0 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.560 surplus_hugepages=0 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.560 anon_hugepages=0 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8139316 kB' 'MemAvailable: 9521088 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452716 kB' 'Inactive: 1267256 kB' 'Active(anon): 131856 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123008 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133000 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71812 kB' 'KernelStack: 6352 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.560 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
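The repetitive trace above and below is the suite's get_meminfo helper (setup/common.sh, invoked from setup/hugepages.sh@110 as "get_meminfo HugePages_Total") running under set -x: it scans /proc/meminfo one key at a time looking for the requested field, so every key it skips shows up as a "continue" line in the log. Below is a condensed sketch of that pattern, reconstructed from the trace and simplified, not a verbatim copy of the helper.

shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo_sketch() {
    # key to look up, plus an optional NUMA node number
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _

    # Per-node meminfo files live under /sys and prefix every line with "Node N "
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the per-node prefix (no-op for /proc/meminfo)

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. 1024 for HugePages_Total
    done
    return 1
}

# get_meminfo_sketch HugePages_Total    -> 1024 in the run above
# get_meminfo_sketch HugePages_Surp 0   -> 0 for node0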
00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.561 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.562 18:54:42 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8139316 kB' 'MemUsed: 4102656 kB' 'SwapCached: 0 kB' 'Active: 452720 kB' 'Inactive: 1267256 kB' 'Active(anon): 131860 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1598580 kB' 'Mapped: 48632 kB' 'AnonPages: 123016 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61188 kB' 'Slab: 132996 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.562 18:54:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.562 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.563 node0=1024 expecting 1024 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.563 
18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:15.563 00:04:15.563 real 0m0.552s 00:04:15.563 user 0m0.261s 00:04:15.563 sys 0m0.298s 00:04:15.563 ************************************ 00:04:15.563 END TEST even_2G_alloc 00:04:15.563 ************************************ 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.563 18:54:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:15.563 18:54:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:15.563 18:54:42 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:15.563 18:54:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.563 18:54:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.563 18:54:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.563 ************************************ 00:04:15.563 START TEST odd_alloc 00:04:15.563 ************************************ 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # 
HUGE_EVEN_ALLOC=yes 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.563 18:54:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.133 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:16.133 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8137244 kB' 'MemAvailable: 9519016 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 453312 kB' 'Inactive: 1267256 kB' 'Active(anon): 132452 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123604 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133016 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71828 kB' 'KernelStack: 6448 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.133 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
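
The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time: it splits each line on ': ' into a key and a value and keeps reading until the key matches the one requested. A minimal sketch of that lookup pattern, simplified from what the trace shows (the real helper also handles per-node lookups via /sys/devices/system/node/node<N>/meminfo and strips the "Node N" prefix, which is omitted here), is:

    # Simplified sketch of the lookup pattern traced above, not the exact
    # setup/common.sh source. Splits each /proc/meminfo line on ': ' and
    # prints the value of the requested key; the unit column falls into "_".
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Total  ->  1025 on the VM traced above
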
00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 
18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
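
Earlier in this trace the odd_alloc test asked get_test_nr_hugepages for 2098176 kB (HUGEMEM=2049 MB) and arrived at nr_hugepages=1025, which matches the HugePages_Total: 1025 and Hugetlb: 2099200 kB figures in the meminfo dump above. A rough sketch of that sizing arithmetic, assuming the helper rounds the request up to whole 2048 kB pages (the 1025 figure comes from the log; the rounding step is an assumption), is:

    # Illustration only: the 1025-page result is taken from the trace; the
    # ceiling division is an assumed way of arriving at it.
    size_kb=$((2049 * 1024))                          # 2098176 kB requested (HUGEMEM=2049)
    page_kb=2048                                      # Hugepagesize from the dump above
    pages=$(( (size_kb + page_kb - 1) / page_kb ))    # 1025, deliberately odd for this test
    echo "${pages} pages -> $((pages * page_kb)) kB"  # 1025 pages -> 2099200 kB (Hugetlb above)
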
00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.134 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8136992 kB' 'MemAvailable: 9518764 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452944 kB' 'Inactive: 1267256 kB' 'Active(anon): 132084 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123272 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133004 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71816 kB' 'KernelStack: 6444 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 
18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.135 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 
18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8137672 kB' 'MemAvailable: 9519444 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452680 kB' 'Inactive: 1267256 kB' 'Active(anon): 131820 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122948 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133004 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71816 kB' 'KernelStack: 6412 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
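
At this point verify_nr_hugepages has already recorded anon=0 (AnonHugePages) and surp=0 (HugePages_Surp) and is scanning for HugePages_Rsvd; once all three values are in hand the test checks that the configured page count accounts for the reported total. A condensed sketch of that check, reusing the get_meminfo_sketch helper shown earlier and hypothetical variable names that mirror the quantities in the log, is:

    # Condensed sketch of the verification traced here; values in the
    # comments are the ones visible in this log.
    nr_hugepages=1025
    anon=$(get_meminfo_sketch AnonHugePages)      # 0, recorded separately
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0
    total=$(get_meminfo_sketch HugePages_Total)   # 1025
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count"
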
00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.136 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.137 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:16.138 nr_hugepages=1025 00:04:16.138 resv_hugepages=0 00:04:16.138 surplus_hugepages=0 00:04:16.138 anon_hugepages=0 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8137672 kB' 'MemAvailable: 9519444 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452924 kB' 'Inactive: 1267256 kB' 'Active(anon): 132064 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123196 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 132992 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71804 kB' 'KernelStack: 6412 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.138 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:16.139 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8137672 kB' 'MemUsed: 4104300 kB' 'SwapCached: 0 kB' 'Active: 452816 kB' 'Inactive: 1267256 kB' 'Active(anon): 131956 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1598580 kB' 'Mapped: 48536 kB' 'AnonPages: 123136 kB' 'Shmem: 10464 kB' 'KernelStack: 6360 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61188 kB' 'Slab: 132988 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
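The run of IFS=': ' / read -r var val _ / continue records above and immediately below is setup/common.sh's get_meminfo walking one meminfo line per iteration until it reaches the requested key (here HugePages_Surp for node 0). A minimal sketch reconstructed from this xtrace, not the verbatim SPDK helper; extglob is assumed for the Node-prefix strip:

shopt -s extglob
get_meminfo() {   # sketch only; variable names follow the trace, not the actual source
    local get=$1 node=$2 mem_f=/proc/meminfo mem var val _ line
    # prefer the per-node file when a node id is given and the file exists
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node entries
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the continue traced for every non-matching key
        echo "$val" && return 0            # e.g. "echo 0" once HugePages_Surp is found
    done
}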
00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.140 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
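Once that scan echoes 0 for HugePages_Surp, the odd_alloc verification that follows is plain arithmetic over values already dumped above. A hedged summary using this run's figures; the hugepages.sh checks at @107/@110/@117 reduce to roughly this, with helper calls reusing the get_meminfo sketch:

nr_hugepages=1025                              # odd page count configured by this test
resv=$(get_meminfo HugePages_Rsvd)             # 0 in the dump above
surp=$(get_meminfo HugePages_Surp 0)           # 0 for node 0
total=$(get_meminfo HugePages_Total)           # 1025
(( total == nr_hugepages + surp + resv )) || echo "global hugepage count mismatch"
(( $(get_meminfo HugePages_Total 0) == nr_hugepages )) || echo "node0 mismatch"
echo "node0=$(get_meminfo HugePages_Total 0) expecting $nr_hugepages"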
00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.141 node0=1025 expecting 1025 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:16.141 00:04:16.141 real 0m0.574s 00:04:16.141 user 0m0.297s 00:04:16.141 sys 0m0.284s 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.141 ************************************ 00:04:16.141 END TEST odd_alloc 00:04:16.141 ************************************ 00:04:16.141 18:54:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:16.399 18:54:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:16.399 18:54:43 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:16.399 18:54:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.399 18:54:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.399 18:54:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.399 ************************************ 00:04:16.399 START TEST custom_alloc 00:04:16.399 ************************************ 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.399 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.659 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:16.659 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.659 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9191176 kB' 'MemAvailable: 10572948 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 453156 kB' 'Inactive: 1267256 kB' 'Active(anon): 132296 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123480 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133012 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71824 kB' 'KernelStack: 6372 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
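The custom_alloc test that begins above asks get_test_nr_hugepages for 1048576 kB and pins the result to a single node through HUGENODE. With the 2048 kB Hugepagesize shown in every meminfo dump in this log, the page count works out as below; this is a sketch of the arithmetic, not the literal hugepages.sh body:

size_kb=1048576                                  # argument passed to get_test_nr_hugepages (1 GiB)
hugepagesize_kb=2048                             # Hugepagesize reported in /proc/meminfo
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 512
HUGENODE="nodes_hp[0]=$nr_hugepages"             # all 512 pages requested on node 0
echo "expect HugePages_Total: $nr_hugepages, Hugetlb: $(( nr_hugepages * hugepagesize_kb )) kB"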
00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.660 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.661 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 9191176 kB' 'MemAvailable: 10572952 kB' 'Buffers: 2436 kB' 'Cached: 1596148 kB' 'SwapCached: 0 kB' 'Active: 452724 kB' 'Inactive: 1267260 kB' 'Active(anon): 131864 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123020 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133016 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71828 kB' 'KernelStack: 6368 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.662 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
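The long run of IFS=': ' / read -r var val _ / continue entries above is one loop in setup/common.sh: get_meminfo snapshots /proc/meminfo (or a per-node meminfo file), walks it key by key, and skips every counter that does not match the one requested. A minimal sketch reconstructed from this trace — simplified, not the verbatim helper:

    #!/usr/bin/env bash
    shopt -s extglob                              # needed for the +([0-9]) glob below
    get_meminfo() {                               # e.g. get_meminfo HugePages_Surp
        local get=$1 var val _
        local -a mem
        mapfile -t mem < /proc/meminfo            # snapshot all "Key: value kB" lines
        mem=("${mem[@]#Node +([0-9]) }")          # strip "Node N " prefixes used by per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue      # skip keys that do not match (the "continue" entries above)
            echo "${val:-0}"                      # print the matching value and stop
            return 0
        done
        echo 0                                    # key not present: report 0
    }
    surp=$(get_meminfo HugePages_Surp)            # the scan traced above resolves this to 0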
00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.663 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9191780 kB' 'MemAvailable: 10573556 kB' 'Buffers: 2436 kB' 'Cached: 1596148 kB' 'SwapCached: 0 kB' 'Active: 452448 kB' 'Inactive: 1267260 kB' 'Active(anon): 131588 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122764 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133008 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71820 kB' 'KernelStack: 6368 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.664 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.665 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:16.666 nr_hugepages=512 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:16.666 resv_hugepages=0 
00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.666 surplus_hugepages=0 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.666 anon_hugepages=0 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.666 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9192148 kB' 'MemAvailable: 10573924 kB' 'Buffers: 2436 kB' 'Cached: 1596148 kB' 'SwapCached: 0 kB' 'Active: 452416 kB' 'Inactive: 1267260 kB' 'Active(anon): 131556 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122732 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133008 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71820 kB' 'KernelStack: 6352 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.927 18:54:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.927 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 
18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.928 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9192276 kB' 'MemUsed: 3049696 kB' 'SwapCached: 0 kB' 'Active: 452712 kB' 'Inactive: 1267260 kB' 'Active(anon): 131852 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1598584 kB' 'Mapped: 48596 kB' 'AnonPages: 123032 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61188 kB' 'Slab: 133008 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.929 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.930 18:54:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.930 18:54:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.930 18:54:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.930 18:54:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.930 node0=512 expecting 512 00:04:16.930 18:54:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:16.930 18:54:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:16.930 00:04:16.930 real 0m0.550s 00:04:16.930 user 0m0.265s 00:04:16.930 sys 0m0.319s 00:04:16.930 18:54:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.930 18:54:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:16.930 ************************************ 00:04:16.930 END TEST custom_alloc 
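With node0=512 matching the 512 pages requested, custom_alloc passes. Stripped of the meminfo-scanning noise, the verification that just ran reduces to two checks; the sketch below condenses the hugepages.sh@100-130 steps visible in the trace and folds the per-node surplus/reserve accumulation into a single node-0 lookup, so it is an approximation rather than the script itself:

    verify_custom_alloc() {
        local nr_hugepages=512
        local resv surp total node0

        resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
        surp=$(get_meminfo HugePages_Surp)     # 0 in this run
        total=$(get_meminfo HugePages_Total)   # 512 in this run

        # Global pool check: every requested page must be accounted for.
        (( total == nr_hugepages + surp + resv )) || return 1

        # Per-node check: node 0 was asked to hold the whole 512-page pool.
        node0=$(get_meminfo HugePages_Total 0)
        echo "node0=$node0 expecting $nr_hugepages"
        [[ $node0 == "$nr_hugepages" ]]
    }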
00:04:16.930 ************************************ 00:04:16.930 18:54:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:16.930 18:54:44 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:16.930 18:54:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.930 18:54:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.930 18:54:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.930 ************************************ 00:04:16.930 START TEST no_shrink_alloc 00:04:16.930 ************************************ 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.930 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.189 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.189 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:17.189 
18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.189 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8144500 kB' 'MemAvailable: 9526272 kB' 'Buffers: 2436 kB' 'Cached: 1596144 kB' 'SwapCached: 0 kB' 'Active: 452908 kB' 'Inactive: 1267256 kB' 'Active(anon): 132048 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123256 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133020 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71832 kB' 'KernelStack: 6324 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
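The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 above gates the anonymous-hugepage accounting: AnonHugePages is only looked up when transparent hugepages are not pinned to "never" on this host. Here the mode string is "always [madvise] never" (madvise selected), so the lookup runs and the trace below walks /proc/meminfo once more. A sketch of that gate, assuming the mode string comes from the usual sysfs file (the trace does not show where it was read):

    anon=0
    thp_enabled=$(</sys/kernel/mm/transparent_hugepage/enabled)   # assumed path
    if [[ $thp_enabled != *"[never]"* ]]; then
        # THP can create anonymous huge pages, so count them too.
        anon=$(get_meminfo AnonHugePages)    # 0 kB in this run
    fi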
00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 
18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.190 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.453 
18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.453 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8144752 kB' 'MemAvailable: 9526528 kB' 'Buffers: 2436 kB' 'Cached: 1596148 kB' 'SwapCached: 0 kB' 'Active: 452744 kB' 'Inactive: 1267260 kB' 'Active(anon): 131884 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123040 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133040 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71852 kB' 'KernelStack: 6368 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.454 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.455 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.456 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8144752 kB' 'MemAvailable: 9526528 kB' 'Buffers: 2436 kB' 'Cached: 1596148 kB' 'SwapCached: 0 kB' 'Active: 452444 kB' 'Inactive: 1267260 kB' 'Active(anon): 131584 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122748 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133036 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71848 kB' 'KernelStack: 6352 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.457 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.458 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.459 18:54:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.459 nr_hugepages=1024 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.459 resv_hugepages=0 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.459 surplus_hugepages=0 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.459 anon_hugepages=0 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8144752 kB' 'MemAvailable: 9526528 kB' 'Buffers: 2436 kB' 'Cached: 1596148 kB' 'SwapCached: 0 kB' 'Active: 452640 kB' 'Inactive: 1267260 kB' 'Active(anon): 131780 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122944 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 61188 kB' 'Slab: 133036 kB' 'SReclaimable: 61188 kB' 'SUnreclaim: 71848 kB' 'KernelStack: 6336 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.459 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
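Around setup/hugepages.sh@97-@110 the no_shrink_alloc test turns those parsed fields into its pass/fail accounting: anonymous, surplus and reserved hugepages all come back as 0, the requested count is 1024, and the total the kernel reports must match it. The sketch below fills in the values this run prints; variable names follow the trace, but the structure of the real test helper may differ, and meminfo_val() is only a stand-in for get_meminfo().

meminfo_val() { awk -v f="$1:" '$1 == f { print $2 }' /proc/meminfo; }  # stand-in for get_meminfo

nr_hugepages=1024
anon=$(meminfo_val AnonHugePages)   # kB of transparent hugepages in use -> 0
surp=$(meminfo_val HugePages_Surp)  # surplus pages                      -> 0
resv=$(meminfo_val HugePages_Rsvd)  # reserved pages                     -> 0

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# With nothing surplus or reserved, the pool must still hold exactly the
# requested number of pages, i.e. the allocation did not shrink:
(( $(meminfo_val HugePages_Total) == nr_hugepages + surp + resv ))
(( $(meminfo_val HugePages_Total) == nr_hugepages ))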
00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.460 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.461 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8146640 kB' 'MemUsed: 4095332 kB' 'SwapCached: 0 kB' 'Active: 448020 kB' 'Inactive: 1267260 kB' 'Active(anon): 127160 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1598584 kB' 'Mapped: 47980 kB' 'AnonPages: 118344 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61184 kB' 'Slab: 132992 kB' 'SReclaimable: 61184 kB' 'SUnreclaim: 71808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.462 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 
18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:17.463 node0=1024 expecting 1024 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.463 18:54:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.723 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.723 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.723 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:17.723 18:54:45 
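The xtrace above is setup/common.sh's get_meminfo helper at work: it reads /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node index is supplied, strips the "Node N " prefix from the per-node file, splits each entry on ': ', and echoes the value of the first field whose name matches the requested key. That is where the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 figures and the "node0=1024 expecting 1024" check come from. A minimal stand-alone sketch of the same pattern follows; the function name and structure here are illustrative only, not the SPDK source.

get_meminfo_sketch() {
    # Return one field from /proc/meminfo, or from the per-node file when a node index is given.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}              # per-node entries are prefixed with "Node N "
        IFS=': ' read -r var val _ <<< "$line"  # split "HugePages_Total:    1024" into name and value
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Total              # prints 1024 on the VM in this run
get_meminfo_sketch HugePages_Surp 0             # surplus hugepages reported for node 0

The hugepages.sh lines that follow walk /sys/devices/system/node/node* (a single node here, so no_nodes=1), add the reserved count to the per-node expectation, and print "node0=1024 expecting 1024"; with CLEAR_HUGE=no and NRHUGE=512 the subsequent setup.sh run leaves the existing allocation in place, hence the "Requested 512 hugepages but 1024 already allocated on node0" message.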
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.723 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8146636 kB' 'MemAvailable: 9528404 kB' 'Buffers: 2436 kB' 'Cached: 1596148 kB' 'SwapCached: 0 kB' 'Active: 448392 kB' 'Inactive: 1267260 kB' 'Active(anon): 127532 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118684 kB' 'Mapped: 47932 kB' 'Shmem: 10464 kB' 'KReclaimable: 61172 kB' 'Slab: 132812 kB' 'SReclaimable: 61172 kB' 'SUnreclaim: 71640 kB' 'KernelStack: 6336 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.988 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8146384 kB' 'MemAvailable: 9528152 kB' 'Buffers: 2436 kB' 'Cached: 1596148 kB' 'SwapCached: 0 kB' 'Active: 448024 kB' 'Inactive: 1267260 kB' 'Active(anon): 127164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118324 kB' 'Mapped: 47860 kB' 'Shmem: 10464 kB' 'KReclaimable: 61172 kB' 'Slab: 132800 kB' 'SReclaimable: 61172 kB' 'SUnreclaim: 71628 kB' 'KernelStack: 6320 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.989 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 
18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8146384 kB' 'MemAvailable: 9528152 kB' 'Buffers: 2436 kB' 'Cached: 1596148 kB' 'SwapCached: 0 kB' 'Active: 447940 kB' 'Inactive: 1267260 kB' 'Active(anon): 127080 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118288 kB' 'Mapped: 47860 kB' 'Shmem: 10464 kB' 'KReclaimable: 61172 kB' 'Slab: 132800 kB' 'SReclaimable: 61172 kB' 'SUnreclaim: 71628 kB' 'KernelStack: 6304 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.990 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.991 nr_hugepages=1024 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.991 resv_hugepages=0 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.991 surplus_hugepages=0 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.991 anon_hugepages=0 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8146384 kB' 'MemAvailable: 9528152 kB' 'Buffers: 2436 kB' 'Cached: 1596148 kB' 'SwapCached: 0 kB' 'Active: 447756 kB' 'Inactive: 1267260 kB' 'Active(anon): 126896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118064 kB' 'Mapped: 47860 kB' 'Shmem: 10464 kB' 'KReclaimable: 61172 kB' 'Slab: 132800 kB' 'SReclaimable: 61172 kB' 'SUnreclaim: 71628 kB' 'KernelStack: 6320 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.991 18:54:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.991 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
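[editor note] A few fields later the scan reaches HugePages_Total, prints 1024 and returns; hugepages.sh then checks that count against nr_hugepages + surp + resv, enumerates the NUMA nodes (this VM reports one), and repeats the scan against /sys/devices/system/node/node0/meminfo for HugePages_Surp. A rough sketch of that per-node bookkeeping follows; the names nodes_sys and no_nodes come from the trace, the rest is an illustrative paraphrase that reuses the get_meminfo sketch above and leaves the real script's reserved/surplus arithmetic out.

  # Rough per-node accounting sketch (single node, 1024 pages expected per node).
  shopt -s extglob nullglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=1024        # what this run expects on each node
  done
  no_nodes=${#nodes_sys[@]}                 # 1 on this VM
  (( no_nodes > 0 )) || exit 1
  for node in "${!nodes_sys[@]}"; do
      total=$(get_meminfo HugePages_Total "$node")
      surp=$(get_meminfo HugePages_Surp "$node")     # 0 in this trace
      echo "node$node=$total expecting ${nodes_sys[node]}"
      [[ $total == "${nodes_sys[node]}" ]]           # the final 1024 == 1024 check
  done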
00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8146384 kB' 'MemUsed: 4095588 kB' 'SwapCached: 0 kB' 'Active: 
447756 kB' 'Inactive: 1267260 kB' 'Active(anon): 126896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1267260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1598584 kB' 'Mapped: 47860 kB' 'AnonPages: 118064 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61172 kB' 'Slab: 132800 kB' 'SReclaimable: 61172 kB' 'SUnreclaim: 71628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.992 
18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.992 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.993 node0=1024 expecting 1024 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:17.993 00:04:17.993 real 0m1.119s 00:04:17.993 user 0m0.510s 00:04:17.993 sys 0m0.623s 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.993 18:54:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:17.993 ************************************ 00:04:17.993 END TEST no_shrink_alloc 00:04:17.993 ************************************ 00:04:17.993 18:54:45 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:17.993 18:54:45 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:17.993 18:54:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:17.993 18:54:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:17.993 
18:54:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:17.993 18:54:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:17.993 18:54:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:17.993 18:54:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:17.993 18:54:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:17.993 18:54:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:17.993 00:04:17.993 real 0m4.842s 00:04:17.993 user 0m2.226s 00:04:17.993 sys 0m2.634s 00:04:17.993 18:54:45 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.993 18:54:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:17.993 ************************************ 00:04:17.993 END TEST hugepages 00:04:17.993 ************************************ 00:04:17.993 18:54:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:17.993 18:54:45 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:17.993 18:54:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.993 18:54:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.993 18:54:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:18.252 ************************************ 00:04:18.252 START TEST driver 00:04:18.252 ************************************ 00:04:18.252 18:54:45 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:18.252 * Looking for test storage... 00:04:18.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:18.252 18:54:45 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:18.252 18:54:45 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.252 18:54:45 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.819 18:54:45 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:18.819 18:54:45 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.819 18:54:45 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.819 18:54:45 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:18.819 ************************************ 00:04:18.819 START TEST guess_driver 00:04:18.819 ************************************ 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
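[editor note] The hugepages suite finishes just above: clear_hp loops over /sys/devices/system/node/node*/hugepages/hugepages-*/ and echoes 0 into each counter (presumably the nr_hugepages files, the redirection is hidden by xtrace) and exports CLEAR_HUGE=yes. driver.sh then starts guessing which userspace I/O driver to bind: vfio is only chosen when IOMMU groups exist or unsafe no-IOMMU mode is enabled, and on this VM neither holds, so it falls back to uio_pci_generic after confirming the module resolves via modprobe. The sketch below collapses the vfio/uio/pick_driver helpers seen in the trace into one illustrative function.

  # Condensed sketch of the vfio-vs-uio decision guess_driver makes in the trace below.
  shopt -s nullglob
  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*) unsafe=''
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
          echo uio_pci_generic                        # the branch taken on this VM
      else
          echo 'No valid driver found'
      fi
  }
  echo "Looking for driver=$(pick_driver)"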
00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:18.819 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:18.819 Looking for driver=uio_pci_generic 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.819 18:54:45 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.386 18:54:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:19.386 18:54:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:19.386 18:54:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.645 18:54:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.645 18:54:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:19.645 18:54:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.645 18:54:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.645 18:54:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:19.645 18:54:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.645 18:54:46 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:19.645 18:54:46 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:19.645 18:54:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.645 18:54:46 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.242 00:04:20.242 real 0m1.481s 00:04:20.242 user 0m0.526s 00:04:20.242 sys 0m0.963s 00:04:20.242 18:54:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:20.242 ************************************ 00:04:20.242 END TEST guess_driver 00:04:20.242 18:54:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:20.242 ************************************ 00:04:20.242 18:54:47 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:20.242 00:04:20.242 real 0m2.197s 00:04:20.242 user 0m0.772s 00:04:20.242 sys 0m1.486s 00:04:20.242 ************************************ 00:04:20.242 END TEST driver 00:04:20.242 ************************************ 00:04:20.242 18:54:47 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.242 18:54:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:20.242 18:54:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:20.242 18:54:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:20.242 18:54:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.242 18:54:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.242 18:54:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:20.242 ************************************ 00:04:20.242 START TEST devices 00:04:20.242 ************************************ 00:04:20.242 18:54:47 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:20.500 * Looking for test storage... 00:04:20.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:20.500 18:54:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:20.500 18:54:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:20.500 18:54:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.500 18:54:47 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.437 18:54:48 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:21.437 18:54:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:21.437 18:54:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:21.437 18:54:48 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:21.437 18:54:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.437 18:54:48 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:21.437 18:54:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
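[editor note] The devices suite starts here. The get_zoned_devs pass in the trace checks /sys/block/nvme*/queue/zoned for every NVMe namespace (all report "none" on this VM), and the enumeration that follows keeps a namespace only if it carries no partition table (spdk-gpt.py and blkid -s PTTYPE both come back empty, hence the repeated "No valid GPT data, bailing") and is at least min_disk_size=3221225472 bytes (3 GiB). Four namespaces pass, nvme0n1..n3 at 4 GiB and nvme1n1 at 5 GiB, and nvme0n1 becomes the test disk. A condensed sketch of that filter follows; the helper name usable_test_disks is illustrative and the PCI bookkeeping of the real devices.sh is elided.

  # Hedged sketch of the device filter devices.sh applies in the trace below.
  min_disk_size=$((3 * 1024 * 1024 * 1024))     # 3221225472, as set in the trace
  usable_test_disks() {
      local block name size pt
      for block in /sys/block/nvme*; do
          name=${block##*/}
          # skip zoned namespaces (every one here reports "none")
          [[ $(cat "$block/queue/zoned" 2>/dev/null) == none ]] || continue
          # skip anything that already carries a partition table
          pt=$(blkid -s PTTYPE -o value "/dev/$name" 2>/dev/null)
          [[ -z $pt ]] || continue
          # /sys/block/<dev>/size is in 512-byte sectors
          size=$(( $(cat "$block/size") * 512 ))
          (( size >= min_disk_size )) && echo "$name"
      done
  }
  usable_test_disks     # on this VM: nvme0n1, nvme0n2, nvme0n3 and nvme1n1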
00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:21.438 18:54:48 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:21.438 No valid GPT data, bailing 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:21.438 18:54:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:21.438 18:54:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:21.438 18:54:48 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:21.438 
18:54:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:21.438 No valid GPT data, bailing 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:21.438 18:54:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:21.438 18:54:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:21.438 18:54:48 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:21.438 No valid GPT data, bailing 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:21.438 18:54:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:21.438 18:54:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:21.438 18:54:48 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:21.438 18:54:48 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:21.438 No valid GPT data, bailing 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:21.438 18:54:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:21.438 18:54:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:21.438 18:54:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:21.438 18:54:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:21.439 18:54:48 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:21.439 18:54:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:21.439 18:54:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.439 18:54:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:21.439 18:54:48 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:21.439 18:54:48 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:21.439 18:54:48 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:21.439 18:54:48 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.439 18:54:48 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.439 18:54:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:21.439 ************************************ 00:04:21.439 START TEST nvme_mount 00:04:21.439 ************************************ 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:21.439 18:54:48 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:22.816 Creating new GPT entries in memory. 00:04:22.816 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:22.816 other utilities. 00:04:22.816 18:54:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:22.816 18:54:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.816 18:54:49 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:22.816 18:54:49 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.816 18:54:49 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:23.751 Creating new GPT entries in memory. 00:04:23.751 The operation has completed successfully. 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56994 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.751 18:54:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.751 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.751 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:23.751 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:23.751 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.751 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.751 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.010 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.010 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.010 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.010 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.269 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.269 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:24.269 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.269 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.269 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:24.269 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:24.269 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.269 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.269 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.269 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:24.269 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.269 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.269 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.529 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:24.529 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:24.529 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:24.529 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.529 18:54:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:24.788 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.788 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:24.788 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.788 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.788 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.788 18:54:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.788 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:24.788 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.046 18:54:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.046 18:54:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.047 18:54:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:25.305 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.305 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:25.305 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:25.305 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.305 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.305 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:25.563 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.563 ************************************ 00:04:25.563 END TEST nvme_mount 00:04:25.563 ************************************ 00:04:25.563 00:04:25.563 real 0m4.044s 00:04:25.563 user 0m0.711s 00:04:25.563 sys 0m1.073s 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.563 18:54:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:25.563 18:54:52 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:25.563 18:54:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:25.563 18:54:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.563 18:54:52 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.563 18:54:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:25.563 ************************************ 00:04:25.563 START TEST dm_mount 00:04:25.563 ************************************ 00:04:25.563 18:54:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:25.563 18:54:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:25.563 18:54:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:25.563 18:54:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:25.563 18:54:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:25.563 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:25.563 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
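The cleanup_nvme step above unmounts the scratch mount point (if anything is still mounted) and wipes leftover signatures before dm_mount repartitions the same drive. A minimal stand-alone sketch of that sequence, with the device and mount point taken from this run (in this pass only the whole-disk wipe had anything left to erase):

  # cleanup_nvme, roughly: unmount the test dir, then clear fs/partition signatures
  mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  mountpoint -q "$mnt" && umount "$mnt"
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # old test partition, if present
  [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # whole-disk GPT/PMBR remnants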
00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:25.564 18:54:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:26.979 Creating new GPT entries in memory. 00:04:26.979 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:26.979 other utilities. 00:04:26.979 18:54:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:26.979 18:54:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.979 18:54:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.979 18:54:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.979 18:54:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:27.916 Creating new GPT entries in memory. 00:04:27.916 The operation has completed successfully. 00:04:27.916 18:54:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.916 18:54:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.916 18:54:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.916 18:54:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.916 18:54:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:28.853 The operation has completed successfully. 
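The sgdisk calls above rebuild the label dm_mount needs: zap the old GPT, then add two partitions of 262144 sectors each (the 1073741824-byte test size divided by 4096 in common.sh@51). A sketch of just that step, using the exact sector ranges from this run:

  # Wipe the old label, then create nvme0n1p1 and nvme0n1p2 back to back
  sgdisk /dev/nvme0n1 --zap-all
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191     # p1: 262144 sectors
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335   # p2: 262144 sectors

The harness pairs this with scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2, as logged above, so both partition nodes exist before the next step uses them.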
00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57428 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.853 18:54:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.113 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.113 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:29.113 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:29.113 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.113 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.113 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.113 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.113 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.373 18:54:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.631 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.632 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:29.632 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:29.632 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.632 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.632 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.632 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.632 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.890 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.890 18:54:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:29.890 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:29.890 00:04:29.890 real 0m4.300s 00:04:29.890 user 0m0.505s 00:04:29.890 sys 0m0.737s 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.890 18:54:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:29.890 ************************************ 00:04:29.890 END TEST dm_mount 00:04:29.890 ************************************ 00:04:29.890 18:54:57 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:29.890 18:54:57 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:29.890 18:54:57 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:29.890 18:54:57 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.890 18:54:57 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.890 18:54:57 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:29.890 18:54:57 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.890 18:54:57 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.458 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:30.458 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:30.458 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:30.458 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:30.458 18:54:57 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:30.458 18:54:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:30.458 18:54:57 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:30.458 18:54:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.458 18:54:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:30.458 18:54:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.458 18:54:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:30.458 ************************************ 00:04:30.458 END TEST devices 00:04:30.458 ************************************ 00:04:30.458 00:04:30.458 real 0m9.925s 00:04:30.458 user 0m1.877s 00:04:30.458 sys 0m2.445s 00:04:30.458 18:54:57 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.458 18:54:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:30.458 18:54:57 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:30.458 00:04:30.458 real 0m22.121s 00:04:30.458 user 0m7.111s 00:04:30.458 sys 0m9.405s 00:04:30.458 18:54:57 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.458 18:54:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:30.458 ************************************ 00:04:30.458 END TEST setup.sh 00:04:30.458 ************************************ 00:04:30.458 18:54:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:30.458 18:54:57 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:31.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.025 Hugepages 00:04:31.025 node hugesize free / total 00:04:31.025 node0 1048576kB 0 / 0 00:04:31.025 node0 2048kB 2048 / 2048 00:04:31.025 00:04:31.025 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:31.025 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:31.284 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:31.284 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:31.284 18:54:58 -- spdk/autotest.sh@130 -- # uname -s 00:04:31.284 18:54:58 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:31.284 18:54:58 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:31.284 18:54:58 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:32.222 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.222 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.222 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.222 18:54:59 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:33.159 18:55:00 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:33.159 18:55:00 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:33.159 18:55:00 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:33.159 18:55:00 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:33.159 18:55:00 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:33.159 18:55:00 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:33.159 18:55:00 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.159 18:55:00 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:33.159 18:55:00 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:33.418 18:55:00 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:33.418 18:55:00 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:33.418 18:55:00 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.677 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.677 Waiting for block devices as requested 00:04:33.677 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:33.936 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:33.936 18:55:01 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:33.936 18:55:01 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:33.936 18:55:01 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:33.936 18:55:01 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:33.936 18:55:01 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:33.936 18:55:01 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:33.936 18:55:01 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:33.936 18:55:01 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:33.936 18:55:01 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:33.936 18:55:01 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:33.936 18:55:01 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:33.936 18:55:01 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:33.936 18:55:01 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:33.936 18:55:01 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:33.936 18:55:01 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:33.936 18:55:01 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:33.936 18:55:01 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:33.936 18:55:01 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:33.936 18:55:01 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:33.936 18:55:01 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:33.936 18:55:01 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:33.936 18:55:01 -- common/autotest_common.sh@1557 -- # continue 00:04:33.936 
18:55:01 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:33.936 18:55:01 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:33.936 18:55:01 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:33.936 18:55:01 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:33.936 18:55:01 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:33.936 18:55:01 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:33.936 18:55:01 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:33.936 18:55:01 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:33.936 18:55:01 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:33.936 18:55:01 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:33.936 18:55:01 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:33.936 18:55:01 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:33.936 18:55:01 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:33.936 18:55:01 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:33.936 18:55:01 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:33.936 18:55:01 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:33.936 18:55:01 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:33.936 18:55:01 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:33.936 18:55:01 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:33.936 18:55:01 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:33.936 18:55:01 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:33.936 18:55:01 -- common/autotest_common.sh@1557 -- # continue 00:04:33.936 18:55:01 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:33.936 18:55:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.936 18:55:01 -- common/autotest_common.sh@10 -- # set +x 00:04:33.936 18:55:01 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:33.936 18:55:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.936 18:55:01 -- common/autotest_common.sh@10 -- # set +x 00:04:33.936 18:55:01 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.872 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:34.872 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:34.872 18:55:02 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:34.872 18:55:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.872 18:55:02 -- common/autotest_common.sh@10 -- # set +x 00:04:34.872 18:55:02 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:34.872 18:55:02 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:34.872 18:55:02 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:34.872 18:55:02 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:34.872 18:55:02 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:34.872 18:55:02 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:34.872 18:55:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:34.872 18:55:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:34.872 18:55:02 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.872 18:55:02 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:34.872 18:55:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:35.130 18:55:02 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:35.130 18:55:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:35.130 18:55:02 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:35.130 18:55:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:35.130 18:55:02 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:35.130 18:55:02 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:35.130 18:55:02 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:35.130 18:55:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:35.130 18:55:02 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:35.130 18:55:02 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:35.130 18:55:02 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:35.130 18:55:02 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:35.130 18:55:02 -- common/autotest_common.sh@1593 -- # return 0 00:04:35.130 18:55:02 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:35.130 18:55:02 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:35.130 18:55:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:35.130 18:55:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:35.130 18:55:02 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:35.130 18:55:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.130 18:55:02 -- common/autotest_common.sh@10 -- # set +x 00:04:35.130 18:55:02 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:35.130 18:55:02 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:35.130 18:55:02 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:35.130 18:55:02 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:35.130 18:55:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.130 18:55:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.130 18:55:02 -- common/autotest_common.sh@10 -- # set +x 00:04:35.130 ************************************ 00:04:35.130 START TEST env 00:04:35.130 ************************************ 00:04:35.130 18:55:02 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:35.130 * Looking for test storage... 
00:04:35.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:35.130 18:55:02 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:35.130 18:55:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.130 18:55:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.130 18:55:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.130 ************************************ 00:04:35.130 START TEST env_memory 00:04:35.130 ************************************ 00:04:35.130 18:55:02 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:35.130 00:04:35.130 00:04:35.130 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.130 http://cunit.sourceforge.net/ 00:04:35.130 00:04:35.130 00:04:35.130 Suite: memory 00:04:35.130 Test: alloc and free memory map ...[2024-07-15 18:55:02.371565] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:35.130 passed 00:04:35.131 Test: mem map translation ...[2024-07-15 18:55:02.403384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:35.131 [2024-07-15 18:55:02.403470] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:35.131 [2024-07-15 18:55:02.403575] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:35.131 [2024-07-15 18:55:02.403596] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:35.389 passed 00:04:35.389 Test: mem map registration ...[2024-07-15 18:55:02.468434] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:35.389 [2024-07-15 18:55:02.468523] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:35.389 passed 00:04:35.389 Test: mem map adjacent registrations ...passed 00:04:35.389 00:04:35.389 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.389 suites 1 1 n/a 0 0 00:04:35.389 tests 4 4 4 0 0 00:04:35.389 asserts 152 152 152 0 n/a 00:04:35.389 00:04:35.389 Elapsed time = 0.217 seconds 00:04:35.389 00:04:35.389 real 0m0.233s 00:04:35.389 user 0m0.214s 00:04:35.389 sys 0m0.016s 00:04:35.389 18:55:02 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.389 18:55:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:35.389 ************************************ 00:04:35.389 END TEST env_memory 00:04:35.389 ************************************ 00:04:35.389 18:55:02 env -- common/autotest_common.sh@1142 -- # return 0 00:04:35.389 18:55:02 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:35.389 18:55:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.389 18:55:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.389 18:55:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.389 ************************************ 00:04:35.389 START TEST env_vtophys 
00:04:35.389 ************************************ 00:04:35.389 18:55:02 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:35.389 EAL: lib.eal log level changed from notice to debug 00:04:35.389 EAL: Detected lcore 0 as core 0 on socket 0 00:04:35.389 EAL: Detected lcore 1 as core 0 on socket 0 00:04:35.389 EAL: Detected lcore 2 as core 0 on socket 0 00:04:35.389 EAL: Detected lcore 3 as core 0 on socket 0 00:04:35.389 EAL: Detected lcore 4 as core 0 on socket 0 00:04:35.389 EAL: Detected lcore 5 as core 0 on socket 0 00:04:35.389 EAL: Detected lcore 6 as core 0 on socket 0 00:04:35.389 EAL: Detected lcore 7 as core 0 on socket 0 00:04:35.389 EAL: Detected lcore 8 as core 0 on socket 0 00:04:35.389 EAL: Detected lcore 9 as core 0 on socket 0 00:04:35.389 EAL: Maximum logical cores by configuration: 128 00:04:35.389 EAL: Detected CPU lcores: 10 00:04:35.389 EAL: Detected NUMA nodes: 1 00:04:35.389 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:35.389 EAL: Detected shared linkage of DPDK 00:04:35.389 EAL: No shared files mode enabled, IPC will be disabled 00:04:35.389 EAL: Selected IOVA mode 'PA' 00:04:35.389 EAL: Probing VFIO support... 00:04:35.389 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:35.389 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:35.389 EAL: Ask a virtual area of 0x2e000 bytes 00:04:35.389 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:35.389 EAL: Setting up physically contiguous memory... 00:04:35.389 EAL: Setting maximum number of open files to 524288 00:04:35.389 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:35.389 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:35.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.389 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:35.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.389 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:35.389 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:35.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.389 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:35.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.389 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:35.389 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:35.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.389 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:35.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.389 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:35.389 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:35.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.389 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:35.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.390 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.390 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:35.390 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:35.390 EAL: Hugepages will be freed exactly as allocated. 
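The memseg reservations above draw on the 2 MiB hugepage pool that setup.sh configured earlier (node0 2048kB 2048 / 2048 in the status table). A quick, generic way to inspect that pool before running the vtophys test, using standard sysfs/procfs paths rather than anything from the test itself:

  # 2 MiB hugepage pool that backs the EAL memseg lists
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
  grep -i huge /proc/meminfo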
00:04:35.390 EAL: No shared files mode enabled, IPC is disabled 00:04:35.390 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: TSC frequency is ~2200000 KHz 00:04:35.649 EAL: Main lcore 0 is ready (tid=7fd38073ea00;cpuset=[0]) 00:04:35.649 EAL: Trying to obtain current memory policy. 00:04:35.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.649 EAL: Restoring previous memory policy: 0 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was expanded by 2MB 00:04:35.649 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:35.649 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:35.649 EAL: Mem event callback 'spdk:(nil)' registered 00:04:35.649 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:35.649 00:04:35.649 00:04:35.649 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.649 http://cunit.sourceforge.net/ 00:04:35.649 00:04:35.649 00:04:35.649 Suite: components_suite 00:04:35.649 Test: vtophys_malloc_test ...passed 00:04:35.649 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:35.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.649 EAL: Restoring previous memory policy: 4 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was expanded by 4MB 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was shrunk by 4MB 00:04:35.649 EAL: Trying to obtain current memory policy. 00:04:35.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.649 EAL: Restoring previous memory policy: 4 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was expanded by 6MB 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was shrunk by 6MB 00:04:35.649 EAL: Trying to obtain current memory policy. 00:04:35.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.649 EAL: Restoring previous memory policy: 4 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was expanded by 10MB 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was shrunk by 10MB 00:04:35.649 EAL: Trying to obtain current memory policy. 
00:04:35.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.649 EAL: Restoring previous memory policy: 4 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was expanded by 18MB 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was shrunk by 18MB 00:04:35.649 EAL: Trying to obtain current memory policy. 00:04:35.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.649 EAL: Restoring previous memory policy: 4 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was expanded by 34MB 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was shrunk by 34MB 00:04:35.649 EAL: Trying to obtain current memory policy. 00:04:35.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.649 EAL: Restoring previous memory policy: 4 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was expanded by 66MB 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was shrunk by 66MB 00:04:35.649 EAL: Trying to obtain current memory policy. 00:04:35.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.649 EAL: Restoring previous memory policy: 4 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was expanded by 130MB 00:04:35.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.649 EAL: request: mp_malloc_sync 00:04:35.649 EAL: No shared files mode enabled, IPC is disabled 00:04:35.649 EAL: Heap on socket 0 was shrunk by 130MB 00:04:35.649 EAL: Trying to obtain current memory policy. 00:04:35.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.908 EAL: Restoring previous memory policy: 4 00:04:35.908 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.908 EAL: request: mp_malloc_sync 00:04:35.908 EAL: No shared files mode enabled, IPC is disabled 00:04:35.908 EAL: Heap on socket 0 was expanded by 258MB 00:04:35.908 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.908 EAL: request: mp_malloc_sync 00:04:35.908 EAL: No shared files mode enabled, IPC is disabled 00:04:35.908 EAL: Heap on socket 0 was shrunk by 258MB 00:04:35.908 EAL: Trying to obtain current memory policy. 
00:04:35.908 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.186 EAL: Restoring previous memory policy: 4 00:04:36.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.186 EAL: request: mp_malloc_sync 00:04:36.186 EAL: No shared files mode enabled, IPC is disabled 00:04:36.186 EAL: Heap on socket 0 was expanded by 514MB 00:04:36.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.186 EAL: request: mp_malloc_sync 00:04:36.186 EAL: No shared files mode enabled, IPC is disabled 00:04:36.186 EAL: Heap on socket 0 was shrunk by 514MB 00:04:36.186 EAL: Trying to obtain current memory policy. 00:04:36.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.445 EAL: Restoring previous memory policy: 4 00:04:36.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.445 EAL: request: mp_malloc_sync 00:04:36.445 EAL: No shared files mode enabled, IPC is disabled 00:04:36.445 EAL: Heap on socket 0 was expanded by 1026MB 00:04:36.703 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.961 passed 00:04:36.961 00:04:36.961 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.961 suites 1 1 n/a 0 0 00:04:36.961 tests 2 2 2 0 0 00:04:36.961 asserts 5183 5183 5183 0 n/a 00:04:36.961 00:04:36.961 Elapsed time = 1.315 seconds 00:04:36.961 EAL: request: mp_malloc_sync 00:04:36.961 EAL: No shared files mode enabled, IPC is disabled 00:04:36.961 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:36.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.961 EAL: request: mp_malloc_sync 00:04:36.961 EAL: No shared files mode enabled, IPC is disabled 00:04:36.961 EAL: Heap on socket 0 was shrunk by 2MB 00:04:36.961 EAL: No shared files mode enabled, IPC is disabled 00:04:36.961 EAL: No shared files mode enabled, IPC is disabled 00:04:36.961 EAL: No shared files mode enabled, IPC is disabled 00:04:36.961 00:04:36.961 real 0m1.516s 00:04:36.961 user 0m0.828s 00:04:36.961 sys 0m0.553s 00:04:36.961 18:55:04 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.961 18:55:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:36.961 ************************************ 00:04:36.961 END TEST env_vtophys 00:04:36.961 ************************************ 00:04:36.961 18:55:04 env -- common/autotest_common.sh@1142 -- # return 0 00:04:36.961 18:55:04 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:36.961 18:55:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.961 18:55:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.961 18:55:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.961 ************************************ 00:04:36.961 START TEST env_pci 00:04:36.961 ************************************ 00:04:36.961 18:55:04 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:36.961 00:04:36.961 00:04:36.961 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.961 http://cunit.sourceforge.net/ 00:04:36.961 00:04:36.961 00:04:36.961 Suite: pci 00:04:36.961 Test: pci_hook ...[2024-07-15 18:55:04.208437] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58621 has claimed it 00:04:36.961 passed 00:04:36.961 00:04:36.961 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.961 suites 1 1 n/a 0 0 00:04:36.961 tests 1 1 1 0 0 00:04:36.961 asserts 25 25 25 0 n/a 00:04:36.961 
00:04:36.961 Elapsed time = 0.002 seconds 00:04:36.961 EAL: Cannot find device (10000:00:01.0) 00:04:36.961 EAL: Failed to attach device on primary process 00:04:36.961 00:04:36.961 real 0m0.022s 00:04:36.961 user 0m0.011s 00:04:36.961 sys 0m0.011s 00:04:36.961 18:55:04 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.961 18:55:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:36.961 ************************************ 00:04:36.961 END TEST env_pci 00:04:36.961 ************************************ 00:04:37.257 18:55:04 env -- common/autotest_common.sh@1142 -- # return 0 00:04:37.257 18:55:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:37.257 18:55:04 env -- env/env.sh@15 -- # uname 00:04:37.257 18:55:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:37.257 18:55:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:37.257 18:55:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.257 18:55:04 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:37.257 18:55:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.257 18:55:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.257 ************************************ 00:04:37.257 START TEST env_dpdk_post_init 00:04:37.257 ************************************ 00:04:37.257 18:55:04 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.257 EAL: Detected CPU lcores: 10 00:04:37.257 EAL: Detected NUMA nodes: 1 00:04:37.257 EAL: Detected shared linkage of DPDK 00:04:37.257 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.257 EAL: Selected IOVA mode 'PA' 00:04:37.257 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.257 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:37.257 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:37.257 Starting DPDK initialization... 00:04:37.257 Starting SPDK post initialization... 00:04:37.257 SPDK NVMe probe 00:04:37.257 Attaching to 0000:00:10.0 00:04:37.257 Attaching to 0000:00:11.0 00:04:37.257 Attached to 0000:00:10.0 00:04:37.257 Attached to 0000:00:11.0 00:04:37.257 Cleaning up... 
00:04:37.257 00:04:37.257 real 0m0.179s 00:04:37.257 user 0m0.042s 00:04:37.257 sys 0m0.037s 00:04:37.258 18:55:04 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.258 18:55:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.258 ************************************ 00:04:37.258 END TEST env_dpdk_post_init 00:04:37.258 ************************************ 00:04:37.258 18:55:04 env -- common/autotest_common.sh@1142 -- # return 0 00:04:37.258 18:55:04 env -- env/env.sh@26 -- # uname 00:04:37.258 18:55:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:37.258 18:55:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.258 18:55:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.258 18:55:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.258 18:55:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.258 ************************************ 00:04:37.258 START TEST env_mem_callbacks 00:04:37.258 ************************************ 00:04:37.258 18:55:04 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.547 EAL: Detected CPU lcores: 10 00:04:37.547 EAL: Detected NUMA nodes: 1 00:04:37.547 EAL: Detected shared linkage of DPDK 00:04:37.547 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.547 EAL: Selected IOVA mode 'PA' 00:04:37.547 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.547 00:04:37.547 00:04:37.547 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.547 http://cunit.sourceforge.net/ 00:04:37.547 00:04:37.547 00:04:37.547 Suite: memory 00:04:37.547 Test: test ... 
00:04:37.547 register 0x200000200000 2097152 00:04:37.547 malloc 3145728 00:04:37.547 register 0x200000400000 4194304 00:04:37.547 buf 0x200000500000 len 3145728 PASSED 00:04:37.547 malloc 64 00:04:37.547 buf 0x2000004fff40 len 64 PASSED 00:04:37.547 malloc 4194304 00:04:37.547 register 0x200000800000 6291456 00:04:37.547 buf 0x200000a00000 len 4194304 PASSED 00:04:37.547 free 0x200000500000 3145728 00:04:37.547 free 0x2000004fff40 64 00:04:37.547 unregister 0x200000400000 4194304 PASSED 00:04:37.547 free 0x200000a00000 4194304 00:04:37.547 unregister 0x200000800000 6291456 PASSED 00:04:37.547 malloc 8388608 00:04:37.547 register 0x200000400000 10485760 00:04:37.547 buf 0x200000600000 len 8388608 PASSED 00:04:37.547 free 0x200000600000 8388608 00:04:37.547 unregister 0x200000400000 10485760 PASSED 00:04:37.547 passed 00:04:37.547 00:04:37.547 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.547 suites 1 1 n/a 0 0 00:04:37.547 tests 1 1 1 0 0 00:04:37.547 asserts 15 15 15 0 n/a 00:04:37.547 00:04:37.547 Elapsed time = 0.010 seconds 00:04:37.547 00:04:37.547 real 0m0.145s 00:04:37.547 user 0m0.018s 00:04:37.547 sys 0m0.025s 00:04:37.547 18:55:04 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.547 18:55:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:37.547 ************************************ 00:04:37.547 END TEST env_mem_callbacks 00:04:37.547 ************************************ 00:04:37.547 18:55:04 env -- common/autotest_common.sh@1142 -- # return 0 00:04:37.547 00:04:37.547 real 0m2.495s 00:04:37.548 user 0m1.227s 00:04:37.548 sys 0m0.892s 00:04:37.548 18:55:04 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.548 18:55:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.548 ************************************ 00:04:37.548 END TEST env 00:04:37.548 ************************************ 00:04:37.548 18:55:04 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.548 18:55:04 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:37.548 18:55:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.548 18:55:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.548 18:55:04 -- common/autotest_common.sh@10 -- # set +x 00:04:37.548 ************************************ 00:04:37.548 START TEST rpc 00:04:37.548 ************************************ 00:04:37.548 18:55:04 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:37.804 * Looking for test storage... 00:04:37.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.804 18:55:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58731 00:04:37.804 18:55:04 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:37.804 18:55:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.804 18:55:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58731 00:04:37.804 18:55:04 rpc -- common/autotest_common.sh@829 -- # '[' -z 58731 ']' 00:04:37.804 18:55:04 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.804 18:55:04 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.804 18:55:04 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
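The rpc suite that starts here launches spdk_tgt with the bdev tracepoint group enabled (-e bdev) and waits for the default RPC socket to come up. A hand-run sketch of that startup, with the binary path copied from this log, /var/tmp/spdk.sock being SPDK's default socket, and the polling loop standing in for the harness's waitforlisten helper, would be:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  # poll until the target answers a harmless RPC on the default UNIX socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done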
00:04:37.804 18:55:04 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.804 18:55:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.804 [2024-07-15 18:55:04.935718] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:04:37.804 [2024-07-15 18:55:04.936356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58731 ] 00:04:37.804 [2024-07-15 18:55:05.077766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.061 [2024-07-15 18:55:05.228644] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:38.061 [2024-07-15 18:55:05.228727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58731' to capture a snapshot of events at runtime. 00:04:38.061 [2024-07-15 18:55:05.228742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:38.061 [2024-07-15 18:55:05.228753] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:38.061 [2024-07-15 18:55:05.228762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58731 for offline analysis/debug. 00:04:38.061 [2024-07-15 18:55:05.228802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.061 [2024-07-15 18:55:05.308589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:38.993 18:55:05 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.993 18:55:05 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:38.993 18:55:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.993 18:55:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.993 18:55:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:38.993 18:55:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:38.993 18:55:05 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.993 18:55:05 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.993 18:55:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.993 ************************************ 00:04:38.993 START TEST rpc_integrity 00:04:38.993 ************************************ 00:04:38.993 18:55:05 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:38.993 18:55:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.993 18:55:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.993 18:55:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.993 18:55:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.993 18:55:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.993 18:55:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.993 18:55:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.993 18:55:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:04:38.993 18:55:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.993 18:55:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.993 18:55:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.993 18:55:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:38.993 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.993 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.993 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.993 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.993 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.993 { 00:04:38.993 "name": "Malloc0", 00:04:38.993 "aliases": [ 00:04:38.993 "269606f3-b798-40be-a4f4-f5a4c07a90c0" 00:04:38.993 ], 00:04:38.993 "product_name": "Malloc disk", 00:04:38.993 "block_size": 512, 00:04:38.993 "num_blocks": 16384, 00:04:38.993 "uuid": "269606f3-b798-40be-a4f4-f5a4c07a90c0", 00:04:38.993 "assigned_rate_limits": { 00:04:38.993 "rw_ios_per_sec": 0, 00:04:38.993 "rw_mbytes_per_sec": 0, 00:04:38.993 "r_mbytes_per_sec": 0, 00:04:38.993 "w_mbytes_per_sec": 0 00:04:38.993 }, 00:04:38.993 "claimed": false, 00:04:38.993 "zoned": false, 00:04:38.993 "supported_io_types": { 00:04:38.993 "read": true, 00:04:38.993 "write": true, 00:04:38.993 "unmap": true, 00:04:38.993 "flush": true, 00:04:38.993 "reset": true, 00:04:38.993 "nvme_admin": false, 00:04:38.993 "nvme_io": false, 00:04:38.993 "nvme_io_md": false, 00:04:38.993 "write_zeroes": true, 00:04:38.993 "zcopy": true, 00:04:38.993 "get_zone_info": false, 00:04:38.993 "zone_management": false, 00:04:38.993 "zone_append": false, 00:04:38.993 "compare": false, 00:04:38.993 "compare_and_write": false, 00:04:38.993 "abort": true, 00:04:38.993 "seek_hole": false, 00:04:38.993 "seek_data": false, 00:04:38.993 "copy": true, 00:04:38.993 "nvme_iov_md": false 00:04:38.993 }, 00:04:38.993 "memory_domains": [ 00:04:38.993 { 00:04:38.993 "dma_device_id": "system", 00:04:38.993 "dma_device_type": 1 00:04:38.993 }, 00:04:38.993 { 00:04:38.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.993 "dma_device_type": 2 00:04:38.993 } 00:04:38.993 ], 00:04:38.993 "driver_specific": {} 00:04:38.993 } 00:04:38.993 ]' 00:04:38.993 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.993 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.993 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:38.993 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.993 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.993 [2024-07-15 18:55:06.083858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:38.993 [2024-07-15 18:55:06.083914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.993 [2024-07-15 18:55:06.083934] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23f54d0 00:04:38.993 [2024-07-15 18:55:06.083943] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.993 [2024-07-15 18:55:06.085707] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.993 [2024-07-15 18:55:06.085745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:04:38.993 Passthru0 00:04:38.993 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.993 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.993 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.993 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.994 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.994 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.994 { 00:04:38.994 "name": "Malloc0", 00:04:38.994 "aliases": [ 00:04:38.994 "269606f3-b798-40be-a4f4-f5a4c07a90c0" 00:04:38.994 ], 00:04:38.994 "product_name": "Malloc disk", 00:04:38.994 "block_size": 512, 00:04:38.994 "num_blocks": 16384, 00:04:38.994 "uuid": "269606f3-b798-40be-a4f4-f5a4c07a90c0", 00:04:38.994 "assigned_rate_limits": { 00:04:38.994 "rw_ios_per_sec": 0, 00:04:38.994 "rw_mbytes_per_sec": 0, 00:04:38.994 "r_mbytes_per_sec": 0, 00:04:38.994 "w_mbytes_per_sec": 0 00:04:38.994 }, 00:04:38.994 "claimed": true, 00:04:38.994 "claim_type": "exclusive_write", 00:04:38.994 "zoned": false, 00:04:38.994 "supported_io_types": { 00:04:38.994 "read": true, 00:04:38.994 "write": true, 00:04:38.994 "unmap": true, 00:04:38.994 "flush": true, 00:04:38.994 "reset": true, 00:04:38.994 "nvme_admin": false, 00:04:38.994 "nvme_io": false, 00:04:38.994 "nvme_io_md": false, 00:04:38.994 "write_zeroes": true, 00:04:38.994 "zcopy": true, 00:04:38.994 "get_zone_info": false, 00:04:38.994 "zone_management": false, 00:04:38.994 "zone_append": false, 00:04:38.994 "compare": false, 00:04:38.994 "compare_and_write": false, 00:04:38.994 "abort": true, 00:04:38.994 "seek_hole": false, 00:04:38.994 "seek_data": false, 00:04:38.994 "copy": true, 00:04:38.994 "nvme_iov_md": false 00:04:38.994 }, 00:04:38.994 "memory_domains": [ 00:04:38.994 { 00:04:38.994 "dma_device_id": "system", 00:04:38.994 "dma_device_type": 1 00:04:38.994 }, 00:04:38.994 { 00:04:38.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.994 "dma_device_type": 2 00:04:38.994 } 00:04:38.994 ], 00:04:38.994 "driver_specific": {} 00:04:38.994 }, 00:04:38.994 { 00:04:38.994 "name": "Passthru0", 00:04:38.994 "aliases": [ 00:04:38.994 "2d1fb86a-3cdd-5b1b-bdb2-0daeff6ee016" 00:04:38.994 ], 00:04:38.994 "product_name": "passthru", 00:04:38.994 "block_size": 512, 00:04:38.994 "num_blocks": 16384, 00:04:38.994 "uuid": "2d1fb86a-3cdd-5b1b-bdb2-0daeff6ee016", 00:04:38.994 "assigned_rate_limits": { 00:04:38.994 "rw_ios_per_sec": 0, 00:04:38.994 "rw_mbytes_per_sec": 0, 00:04:38.994 "r_mbytes_per_sec": 0, 00:04:38.994 "w_mbytes_per_sec": 0 00:04:38.994 }, 00:04:38.994 "claimed": false, 00:04:38.994 "zoned": false, 00:04:38.994 "supported_io_types": { 00:04:38.994 "read": true, 00:04:38.994 "write": true, 00:04:38.994 "unmap": true, 00:04:38.994 "flush": true, 00:04:38.994 "reset": true, 00:04:38.994 "nvme_admin": false, 00:04:38.994 "nvme_io": false, 00:04:38.994 "nvme_io_md": false, 00:04:38.994 "write_zeroes": true, 00:04:38.994 "zcopy": true, 00:04:38.994 "get_zone_info": false, 00:04:38.994 "zone_management": false, 00:04:38.994 "zone_append": false, 00:04:38.994 "compare": false, 00:04:38.994 "compare_and_write": false, 00:04:38.994 "abort": true, 00:04:38.994 "seek_hole": false, 00:04:38.994 "seek_data": false, 00:04:38.994 "copy": true, 00:04:38.994 "nvme_iov_md": false 00:04:38.994 }, 00:04:38.994 "memory_domains": [ 00:04:38.994 { 00:04:38.994 "dma_device_id": "system", 00:04:38.994 
"dma_device_type": 1 00:04:38.994 }, 00:04:38.994 { 00:04:38.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.994 "dma_device_type": 2 00:04:38.994 } 00:04:38.994 ], 00:04:38.994 "driver_specific": { 00:04:38.994 "passthru": { 00:04:38.994 "name": "Passthru0", 00:04:38.994 "base_bdev_name": "Malloc0" 00:04:38.994 } 00:04:38.994 } 00:04:38.994 } 00:04:38.994 ]' 00:04:38.994 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.994 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.994 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.994 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.994 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.994 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.994 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:38.994 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.994 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.994 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.994 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.994 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.994 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.994 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.994 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.994 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.994 18:55:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.994 00:04:38.994 real 0m0.327s 00:04:38.994 user 0m0.212s 00:04:38.994 sys 0m0.044s 00:04:38.994 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.994 18:55:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.994 ************************************ 00:04:38.994 END TEST rpc_integrity 00:04:38.994 ************************************ 00:04:39.252 18:55:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.252 18:55:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:39.252 18:55:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.252 18:55:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.252 18:55:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.252 ************************************ 00:04:39.252 START TEST rpc_plugins 00:04:39.252 ************************************ 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:39.252 18:55:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.252 18:55:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:39.252 18:55:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.252 
18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.252 18:55:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:39.252 { 00:04:39.252 "name": "Malloc1", 00:04:39.252 "aliases": [ 00:04:39.252 "8970062c-ab02-4177-8d13-e906311214d0" 00:04:39.252 ], 00:04:39.252 "product_name": "Malloc disk", 00:04:39.252 "block_size": 4096, 00:04:39.252 "num_blocks": 256, 00:04:39.252 "uuid": "8970062c-ab02-4177-8d13-e906311214d0", 00:04:39.252 "assigned_rate_limits": { 00:04:39.252 "rw_ios_per_sec": 0, 00:04:39.252 "rw_mbytes_per_sec": 0, 00:04:39.252 "r_mbytes_per_sec": 0, 00:04:39.252 "w_mbytes_per_sec": 0 00:04:39.252 }, 00:04:39.252 "claimed": false, 00:04:39.252 "zoned": false, 00:04:39.252 "supported_io_types": { 00:04:39.252 "read": true, 00:04:39.252 "write": true, 00:04:39.252 "unmap": true, 00:04:39.252 "flush": true, 00:04:39.252 "reset": true, 00:04:39.252 "nvme_admin": false, 00:04:39.252 "nvme_io": false, 00:04:39.252 "nvme_io_md": false, 00:04:39.252 "write_zeroes": true, 00:04:39.252 "zcopy": true, 00:04:39.252 "get_zone_info": false, 00:04:39.252 "zone_management": false, 00:04:39.252 "zone_append": false, 00:04:39.252 "compare": false, 00:04:39.252 "compare_and_write": false, 00:04:39.252 "abort": true, 00:04:39.252 "seek_hole": false, 00:04:39.252 "seek_data": false, 00:04:39.252 "copy": true, 00:04:39.252 "nvme_iov_md": false 00:04:39.252 }, 00:04:39.252 "memory_domains": [ 00:04:39.252 { 00:04:39.252 "dma_device_id": "system", 00:04:39.252 "dma_device_type": 1 00:04:39.252 }, 00:04:39.252 { 00:04:39.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.252 "dma_device_type": 2 00:04:39.252 } 00:04:39.252 ], 00:04:39.252 "driver_specific": {} 00:04:39.252 } 00:04:39.252 ]' 00:04:39.252 18:55:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:39.252 18:55:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:39.252 18:55:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.252 18:55:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.252 18:55:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:39.252 18:55:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:39.252 18:55:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:39.252 00:04:39.252 real 0m0.168s 00:04:39.252 user 0m0.112s 00:04:39.252 sys 0m0.018s 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.252 18:55:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.252 ************************************ 00:04:39.252 END TEST rpc_plugins 00:04:39.252 ************************************ 00:04:39.252 18:55:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.252 18:55:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:39.252 18:55:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.252 18:55:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
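rpc_plugins, which just completed above, runs the same create/delete cycle through an out-of-tree rpc.py plugin; PYTHONPATH was extended with the plugin directory earlier in the run. A condensed manual version (plugin and method names as logged; Malloc1 is simply the name this run returned) looks like:

  export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin rpc_plugin create_malloc      # plugin-defined method, returns Malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1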
00:04:39.252 18:55:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.252 ************************************ 00:04:39.252 START TEST rpc_trace_cmd_test 00:04:39.252 ************************************ 00:04:39.252 18:55:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:39.252 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:39.252 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:39.252 18:55:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.252 18:55:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:39.510 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58731", 00:04:39.510 "tpoint_group_mask": "0x8", 00:04:39.510 "iscsi_conn": { 00:04:39.510 "mask": "0x2", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 }, 00:04:39.510 "scsi": { 00:04:39.510 "mask": "0x4", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 }, 00:04:39.510 "bdev": { 00:04:39.510 "mask": "0x8", 00:04:39.510 "tpoint_mask": "0xffffffffffffffff" 00:04:39.510 }, 00:04:39.510 "nvmf_rdma": { 00:04:39.510 "mask": "0x10", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 }, 00:04:39.510 "nvmf_tcp": { 00:04:39.510 "mask": "0x20", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 }, 00:04:39.510 "ftl": { 00:04:39.510 "mask": "0x40", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 }, 00:04:39.510 "blobfs": { 00:04:39.510 "mask": "0x80", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 }, 00:04:39.510 "dsa": { 00:04:39.510 "mask": "0x200", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 }, 00:04:39.510 "thread": { 00:04:39.510 "mask": "0x400", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 }, 00:04:39.510 "nvme_pcie": { 00:04:39.510 "mask": "0x800", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 }, 00:04:39.510 "iaa": { 00:04:39.510 "mask": "0x1000", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 }, 00:04:39.510 "nvme_tcp": { 00:04:39.510 "mask": "0x2000", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 }, 00:04:39.510 "bdev_nvme": { 00:04:39.510 "mask": "0x4000", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 }, 00:04:39.510 "sock": { 00:04:39.510 "mask": "0x8000", 00:04:39.510 "tpoint_mask": "0x0" 00:04:39.510 } 00:04:39.510 }' 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:39.510 00:04:39.510 real 0m0.261s 00:04:39.510 user 0m0.223s 00:04:39.510 sys 0m0.030s 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.510 
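The trace_get_info output above mirrors the -e bdev passed at startup: tpoint_group_mask 0x8 is the bdev group, that group's tpoint_mask is fully enabled, and the trace shared memory is keyed by the target pid. To inspect it outside the test, the target's own startup notice suggests the spdk_trace app; with the pid from this run (and the spdk_trace binary assumed to live under build/bin like the other apps) that is:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path
  # reads /dev/shm/spdk_tgt_trace.pid58731 and prints the recorded tracepoints
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 58731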
************************************ 00:04:39.510 END TEST rpc_trace_cmd_test 00:04:39.510 18:55:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.510 ************************************ 00:04:39.768 18:55:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.768 18:55:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:39.768 18:55:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:39.768 18:55:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:39.768 18:55:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.768 18:55:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.768 18:55:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.768 ************************************ 00:04:39.768 START TEST rpc_daemon_integrity 00:04:39.768 ************************************ 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.768 18:55:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.768 { 00:04:39.768 "name": "Malloc2", 00:04:39.768 "aliases": [ 00:04:39.768 "754a7ec7-ef54-47ee-8446-e4ad25248087" 00:04:39.768 ], 00:04:39.768 "product_name": "Malloc disk", 00:04:39.768 "block_size": 512, 00:04:39.768 "num_blocks": 16384, 00:04:39.768 "uuid": "754a7ec7-ef54-47ee-8446-e4ad25248087", 00:04:39.768 "assigned_rate_limits": { 00:04:39.768 "rw_ios_per_sec": 0, 00:04:39.768 "rw_mbytes_per_sec": 0, 00:04:39.768 "r_mbytes_per_sec": 0, 00:04:39.768 "w_mbytes_per_sec": 0 00:04:39.768 }, 00:04:39.768 "claimed": false, 00:04:39.768 "zoned": false, 00:04:39.768 "supported_io_types": { 00:04:39.768 "read": true, 00:04:39.768 "write": true, 00:04:39.768 "unmap": true, 00:04:39.768 "flush": true, 00:04:39.768 "reset": true, 00:04:39.768 "nvme_admin": false, 00:04:39.768 "nvme_io": false, 00:04:39.768 "nvme_io_md": false, 00:04:39.768 "write_zeroes": true, 00:04:39.768 "zcopy": true, 00:04:39.768 "get_zone_info": false, 00:04:39.768 "zone_management": false, 00:04:39.768 "zone_append": 
false, 00:04:39.768 "compare": false, 00:04:39.768 "compare_and_write": false, 00:04:39.768 "abort": true, 00:04:39.768 "seek_hole": false, 00:04:39.768 "seek_data": false, 00:04:39.768 "copy": true, 00:04:39.768 "nvme_iov_md": false 00:04:39.768 }, 00:04:39.768 "memory_domains": [ 00:04:39.768 { 00:04:39.768 "dma_device_id": "system", 00:04:39.768 "dma_device_type": 1 00:04:39.768 }, 00:04:39.768 { 00:04:39.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.768 "dma_device_type": 2 00:04:39.768 } 00:04:39.768 ], 00:04:39.768 "driver_specific": {} 00:04:39.768 } 00:04:39.768 ]' 00:04:39.769 18:55:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.769 18:55:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.769 18:55:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:39.769 18:55:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.769 18:55:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.769 [2024-07-15 18:55:07.002698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:39.769 [2024-07-15 18:55:07.002750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.769 [2024-07-15 18:55:07.002772] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24ad3a0 00:04:39.769 [2024-07-15 18:55:07.002782] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.769 [2024-07-15 18:55:07.004214] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.769 [2024-07-15 18:55:07.004292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.769 Passthru0 00:04:39.769 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.769 18:55:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.769 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.769 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.769 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.769 18:55:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.769 { 00:04:39.769 "name": "Malloc2", 00:04:39.769 "aliases": [ 00:04:39.769 "754a7ec7-ef54-47ee-8446-e4ad25248087" 00:04:39.769 ], 00:04:39.769 "product_name": "Malloc disk", 00:04:39.769 "block_size": 512, 00:04:39.769 "num_blocks": 16384, 00:04:39.769 "uuid": "754a7ec7-ef54-47ee-8446-e4ad25248087", 00:04:39.769 "assigned_rate_limits": { 00:04:39.769 "rw_ios_per_sec": 0, 00:04:39.769 "rw_mbytes_per_sec": 0, 00:04:39.769 "r_mbytes_per_sec": 0, 00:04:39.769 "w_mbytes_per_sec": 0 00:04:39.769 }, 00:04:39.769 "claimed": true, 00:04:39.769 "claim_type": "exclusive_write", 00:04:39.769 "zoned": false, 00:04:39.769 "supported_io_types": { 00:04:39.769 "read": true, 00:04:39.769 "write": true, 00:04:39.769 "unmap": true, 00:04:39.769 "flush": true, 00:04:39.769 "reset": true, 00:04:39.769 "nvme_admin": false, 00:04:39.769 "nvme_io": false, 00:04:39.769 "nvme_io_md": false, 00:04:39.769 "write_zeroes": true, 00:04:39.769 "zcopy": true, 00:04:39.769 "get_zone_info": false, 00:04:39.769 "zone_management": false, 00:04:39.769 "zone_append": false, 00:04:39.769 "compare": false, 00:04:39.769 "compare_and_write": false, 00:04:39.769 "abort": true, 00:04:39.769 
"seek_hole": false, 00:04:39.769 "seek_data": false, 00:04:39.769 "copy": true, 00:04:39.769 "nvme_iov_md": false 00:04:39.769 }, 00:04:39.769 "memory_domains": [ 00:04:39.769 { 00:04:39.769 "dma_device_id": "system", 00:04:39.769 "dma_device_type": 1 00:04:39.769 }, 00:04:39.769 { 00:04:39.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.769 "dma_device_type": 2 00:04:39.769 } 00:04:39.769 ], 00:04:39.769 "driver_specific": {} 00:04:39.769 }, 00:04:39.769 { 00:04:39.769 "name": "Passthru0", 00:04:39.769 "aliases": [ 00:04:39.769 "f467c87e-3ecc-50f9-99e3-6859d35e058e" 00:04:39.769 ], 00:04:39.769 "product_name": "passthru", 00:04:39.769 "block_size": 512, 00:04:39.769 "num_blocks": 16384, 00:04:39.769 "uuid": "f467c87e-3ecc-50f9-99e3-6859d35e058e", 00:04:39.769 "assigned_rate_limits": { 00:04:39.769 "rw_ios_per_sec": 0, 00:04:39.769 "rw_mbytes_per_sec": 0, 00:04:39.769 "r_mbytes_per_sec": 0, 00:04:39.769 "w_mbytes_per_sec": 0 00:04:39.769 }, 00:04:39.769 "claimed": false, 00:04:39.769 "zoned": false, 00:04:39.769 "supported_io_types": { 00:04:39.769 "read": true, 00:04:39.769 "write": true, 00:04:39.769 "unmap": true, 00:04:39.769 "flush": true, 00:04:39.769 "reset": true, 00:04:39.769 "nvme_admin": false, 00:04:39.769 "nvme_io": false, 00:04:39.769 "nvme_io_md": false, 00:04:39.769 "write_zeroes": true, 00:04:39.769 "zcopy": true, 00:04:39.769 "get_zone_info": false, 00:04:39.769 "zone_management": false, 00:04:39.769 "zone_append": false, 00:04:39.769 "compare": false, 00:04:39.769 "compare_and_write": false, 00:04:39.769 "abort": true, 00:04:39.769 "seek_hole": false, 00:04:39.769 "seek_data": false, 00:04:39.769 "copy": true, 00:04:39.769 "nvme_iov_md": false 00:04:39.769 }, 00:04:39.769 "memory_domains": [ 00:04:39.769 { 00:04:39.769 "dma_device_id": "system", 00:04:39.769 "dma_device_type": 1 00:04:39.769 }, 00:04:39.769 { 00:04:39.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.769 "dma_device_type": 2 00:04:39.769 } 00:04:39.769 ], 00:04:39.769 "driver_specific": { 00:04:39.769 "passthru": { 00:04:39.769 "name": "Passthru0", 00:04:39.769 "base_bdev_name": "Malloc2" 00:04:39.769 } 00:04:39.769 } 00:04:39.769 } 00:04:39.769 ]' 00:04:39.769 18:55:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.028 18:55:07 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.028 00:04:40.028 real 0m0.324s 00:04:40.028 user 0m0.210s 00:04:40.028 sys 0m0.044s 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.028 ************************************ 00:04:40.028 END TEST rpc_daemon_integrity 00:04:40.028 18:55:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.028 ************************************ 00:04:40.028 18:55:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.028 18:55:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:40.028 18:55:07 rpc -- rpc/rpc.sh@84 -- # killprocess 58731 00:04:40.028 18:55:07 rpc -- common/autotest_common.sh@948 -- # '[' -z 58731 ']' 00:04:40.028 18:55:07 rpc -- common/autotest_common.sh@952 -- # kill -0 58731 00:04:40.028 18:55:07 rpc -- common/autotest_common.sh@953 -- # uname 00:04:40.028 18:55:07 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.028 18:55:07 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58731 00:04:40.028 18:55:07 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.028 18:55:07 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.028 18:55:07 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58731' 00:04:40.028 killing process with pid 58731 00:04:40.028 18:55:07 rpc -- common/autotest_common.sh@967 -- # kill 58731 00:04:40.028 18:55:07 rpc -- common/autotest_common.sh@972 -- # wait 58731 00:04:40.686 00:04:40.686 real 0m3.040s 00:04:40.686 user 0m3.779s 00:04:40.686 sys 0m0.785s 00:04:40.686 18:55:07 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.686 18:55:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.686 ************************************ 00:04:40.686 END TEST rpc 00:04:40.686 ************************************ 00:04:40.686 18:55:07 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.686 18:55:07 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.686 18:55:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.686 18:55:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.686 18:55:07 -- common/autotest_common.sh@10 -- # set +x 00:04:40.686 ************************************ 00:04:40.686 START TEST skip_rpc 00:04:40.686 ************************************ 00:04:40.686 18:55:07 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.686 * Looking for test storage... 
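Teardown in these suites goes through the killprocess helper seen above: it checks that the pid is still alive, confirms the process name, sends the default SIGTERM and reaps the process so the exit status can be asserted on. Stripped of its logging, the shell essence is:

  kill -0 "$spdk_pid"       # non-zero exit if the target already died
  kill "$spdk_pid"          # default signal is SIGTERM
  wait "$spdk_pid"          # collect the exit status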
00:04:40.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.686 18:55:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.686 18:55:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.686 18:55:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:40.686 18:55:07 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.686 18:55:07 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.686 18:55:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.686 ************************************ 00:04:40.686 START TEST skip_rpc 00:04:40.686 ************************************ 00:04:40.686 18:55:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:40.686 18:55:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58935 00:04:40.686 18:55:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.686 18:55:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:40.686 18:55:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:40.944 [2024-07-15 18:55:08.030721] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:04:40.944 [2024-07-15 18:55:08.030835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58935 ] 00:04:40.944 [2024-07-15 18:55:08.172091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.201 [2024-07-15 18:55:08.321954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.201 [2024-07-15 18:55:08.406509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58935 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58935 ']' 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58935 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58935 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.460 killing process with pid 58935 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58935' 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58935 00:04:46.460 18:55:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58935 00:04:46.460 00:04:46.460 real 0m5.439s 00:04:46.460 user 0m4.933s 00:04:46.460 sys 0m0.385s 00:04:46.460 18:55:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.460 18:55:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.460 ************************************ 00:04:46.460 END TEST skip_rpc 00:04:46.460 ************************************ 00:04:46.460 18:55:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:46.460 18:55:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.460 18:55:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.460 18:55:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.460 18:55:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.460 ************************************ 00:04:46.460 START TEST skip_rpc_with_json 00:04:46.460 ************************************ 00:04:46.460 18:55:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:46.460 18:55:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.460 18:55:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59016 00:04:46.460 18:55:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.460 18:55:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59016 00:04:46.460 18:55:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59016 ']' 00:04:46.460 18:55:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.460 18:55:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.460 18:55:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.460 18:55:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
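The skip_rpc pass that just finished starts the target with --no-rpc-server and then asserts that an RPC call cannot succeed. A minimal interactive reproduction with the flags and method from this log (the sleep matches the test's own fixed wait; the &&/|| branches are just a way to observe the expected failure by hand):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version \
      && echo 'unexpected: RPC server answered' \
      || echo 'expected: no RPC server is listening'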
00:04:46.460 18:55:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.460 18:55:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.460 [2024-07-15 18:55:13.494530] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:04:46.460 [2024-07-15 18:55:13.494637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59016 ] 00:04:46.460 [2024-07-15 18:55:13.625329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.717 [2024-07-15 18:55:13.763857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.717 [2024-07-15 18:55:13.818420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.360 [2024-07-15 18:55:14.514251] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.360 request: 00:04:47.360 { 00:04:47.360 "trtype": "tcp", 00:04:47.360 "method": "nvmf_get_transports", 00:04:47.360 "req_id": 1 00:04:47.360 } 00:04:47.360 Got JSON-RPC error response 00:04:47.360 response: 00:04:47.360 { 00:04:47.360 "code": -19, 00:04:47.360 "message": "No such device" 00:04:47.360 } 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.360 [2024-07-15 18:55:14.522369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.360 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.618 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.618 18:55:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.618 { 00:04:47.618 "subsystems": [ 00:04:47.618 { 00:04:47.618 "subsystem": "keyring", 00:04:47.618 "config": [] 00:04:47.618 }, 00:04:47.618 { 00:04:47.618 "subsystem": "iobuf", 00:04:47.618 "config": [ 00:04:47.618 { 00:04:47.618 "method": "iobuf_set_options", 00:04:47.618 "params": { 00:04:47.618 "small_pool_count": 8192, 00:04:47.618 "large_pool_count": 1024, 00:04:47.618 "small_bufsize": 8192, 00:04:47.618 "large_bufsize": 135168 00:04:47.618 } 00:04:47.618 } 00:04:47.618 
] 00:04:47.618 }, 00:04:47.618 { 00:04:47.618 "subsystem": "sock", 00:04:47.618 "config": [ 00:04:47.618 { 00:04:47.618 "method": "sock_set_default_impl", 00:04:47.618 "params": { 00:04:47.618 "impl_name": "uring" 00:04:47.618 } 00:04:47.618 }, 00:04:47.618 { 00:04:47.618 "method": "sock_impl_set_options", 00:04:47.618 "params": { 00:04:47.618 "impl_name": "ssl", 00:04:47.618 "recv_buf_size": 4096, 00:04:47.618 "send_buf_size": 4096, 00:04:47.618 "enable_recv_pipe": true, 00:04:47.618 "enable_quickack": false, 00:04:47.618 "enable_placement_id": 0, 00:04:47.618 "enable_zerocopy_send_server": true, 00:04:47.618 "enable_zerocopy_send_client": false, 00:04:47.618 "zerocopy_threshold": 0, 00:04:47.618 "tls_version": 0, 00:04:47.618 "enable_ktls": false 00:04:47.618 } 00:04:47.618 }, 00:04:47.618 { 00:04:47.618 "method": "sock_impl_set_options", 00:04:47.618 "params": { 00:04:47.618 "impl_name": "posix", 00:04:47.618 "recv_buf_size": 2097152, 00:04:47.618 "send_buf_size": 2097152, 00:04:47.618 "enable_recv_pipe": true, 00:04:47.618 "enable_quickack": false, 00:04:47.618 "enable_placement_id": 0, 00:04:47.618 "enable_zerocopy_send_server": true, 00:04:47.618 "enable_zerocopy_send_client": false, 00:04:47.618 "zerocopy_threshold": 0, 00:04:47.618 "tls_version": 0, 00:04:47.618 "enable_ktls": false 00:04:47.618 } 00:04:47.618 }, 00:04:47.618 { 00:04:47.618 "method": "sock_impl_set_options", 00:04:47.618 "params": { 00:04:47.618 "impl_name": "uring", 00:04:47.619 "recv_buf_size": 2097152, 00:04:47.619 "send_buf_size": 2097152, 00:04:47.619 "enable_recv_pipe": true, 00:04:47.619 "enable_quickack": false, 00:04:47.619 "enable_placement_id": 0, 00:04:47.619 "enable_zerocopy_send_server": false, 00:04:47.619 "enable_zerocopy_send_client": false, 00:04:47.619 "zerocopy_threshold": 0, 00:04:47.619 "tls_version": 0, 00:04:47.619 "enable_ktls": false 00:04:47.619 } 00:04:47.619 } 00:04:47.619 ] 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "subsystem": "vmd", 00:04:47.619 "config": [] 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "subsystem": "accel", 00:04:47.619 "config": [ 00:04:47.619 { 00:04:47.619 "method": "accel_set_options", 00:04:47.619 "params": { 00:04:47.619 "small_cache_size": 128, 00:04:47.619 "large_cache_size": 16, 00:04:47.619 "task_count": 2048, 00:04:47.619 "sequence_count": 2048, 00:04:47.619 "buf_count": 2048 00:04:47.619 } 00:04:47.619 } 00:04:47.619 ] 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "subsystem": "bdev", 00:04:47.619 "config": [ 00:04:47.619 { 00:04:47.619 "method": "bdev_set_options", 00:04:47.619 "params": { 00:04:47.619 "bdev_io_pool_size": 65535, 00:04:47.619 "bdev_io_cache_size": 256, 00:04:47.619 "bdev_auto_examine": true, 00:04:47.619 "iobuf_small_cache_size": 128, 00:04:47.619 "iobuf_large_cache_size": 16 00:04:47.619 } 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "method": "bdev_raid_set_options", 00:04:47.619 "params": { 00:04:47.619 "process_window_size_kb": 1024 00:04:47.619 } 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "method": "bdev_iscsi_set_options", 00:04:47.619 "params": { 00:04:47.619 "timeout_sec": 30 00:04:47.619 } 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "method": "bdev_nvme_set_options", 00:04:47.619 "params": { 00:04:47.619 "action_on_timeout": "none", 00:04:47.619 "timeout_us": 0, 00:04:47.619 "timeout_admin_us": 0, 00:04:47.619 "keep_alive_timeout_ms": 10000, 00:04:47.619 "arbitration_burst": 0, 00:04:47.619 "low_priority_weight": 0, 00:04:47.619 "medium_priority_weight": 0, 00:04:47.619 "high_priority_weight": 0, 00:04:47.619 
"nvme_adminq_poll_period_us": 10000, 00:04:47.619 "nvme_ioq_poll_period_us": 0, 00:04:47.619 "io_queue_requests": 0, 00:04:47.619 "delay_cmd_submit": true, 00:04:47.619 "transport_retry_count": 4, 00:04:47.619 "bdev_retry_count": 3, 00:04:47.619 "transport_ack_timeout": 0, 00:04:47.619 "ctrlr_loss_timeout_sec": 0, 00:04:47.619 "reconnect_delay_sec": 0, 00:04:47.619 "fast_io_fail_timeout_sec": 0, 00:04:47.619 "disable_auto_failback": false, 00:04:47.619 "generate_uuids": false, 00:04:47.619 "transport_tos": 0, 00:04:47.619 "nvme_error_stat": false, 00:04:47.619 "rdma_srq_size": 0, 00:04:47.619 "io_path_stat": false, 00:04:47.619 "allow_accel_sequence": false, 00:04:47.619 "rdma_max_cq_size": 0, 00:04:47.619 "rdma_cm_event_timeout_ms": 0, 00:04:47.619 "dhchap_digests": [ 00:04:47.619 "sha256", 00:04:47.619 "sha384", 00:04:47.619 "sha512" 00:04:47.619 ], 00:04:47.619 "dhchap_dhgroups": [ 00:04:47.619 "null", 00:04:47.619 "ffdhe2048", 00:04:47.619 "ffdhe3072", 00:04:47.619 "ffdhe4096", 00:04:47.619 "ffdhe6144", 00:04:47.619 "ffdhe8192" 00:04:47.619 ] 00:04:47.619 } 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "method": "bdev_nvme_set_hotplug", 00:04:47.619 "params": { 00:04:47.619 "period_us": 100000, 00:04:47.619 "enable": false 00:04:47.619 } 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "method": "bdev_wait_for_examine" 00:04:47.619 } 00:04:47.619 ] 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "subsystem": "scsi", 00:04:47.619 "config": null 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "subsystem": "scheduler", 00:04:47.619 "config": [ 00:04:47.619 { 00:04:47.619 "method": "framework_set_scheduler", 00:04:47.619 "params": { 00:04:47.619 "name": "static" 00:04:47.619 } 00:04:47.619 } 00:04:47.619 ] 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "subsystem": "vhost_scsi", 00:04:47.619 "config": [] 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "subsystem": "vhost_blk", 00:04:47.619 "config": [] 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "subsystem": "ublk", 00:04:47.619 "config": [] 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "subsystem": "nbd", 00:04:47.619 "config": [] 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "subsystem": "nvmf", 00:04:47.619 "config": [ 00:04:47.619 { 00:04:47.619 "method": "nvmf_set_config", 00:04:47.619 "params": { 00:04:47.619 "discovery_filter": "match_any", 00:04:47.619 "admin_cmd_passthru": { 00:04:47.619 "identify_ctrlr": false 00:04:47.619 } 00:04:47.619 } 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "method": "nvmf_set_max_subsystems", 00:04:47.619 "params": { 00:04:47.619 "max_subsystems": 1024 00:04:47.619 } 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "method": "nvmf_set_crdt", 00:04:47.619 "params": { 00:04:47.619 "crdt1": 0, 00:04:47.619 "crdt2": 0, 00:04:47.619 "crdt3": 0 00:04:47.619 } 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "method": "nvmf_create_transport", 00:04:47.619 "params": { 00:04:47.619 "trtype": "TCP", 00:04:47.619 "max_queue_depth": 128, 00:04:47.619 "max_io_qpairs_per_ctrlr": 127, 00:04:47.619 "in_capsule_data_size": 4096, 00:04:47.619 "max_io_size": 131072, 00:04:47.619 "io_unit_size": 131072, 00:04:47.619 "max_aq_depth": 128, 00:04:47.619 "num_shared_buffers": 511, 00:04:47.619 "buf_cache_size": 4294967295, 00:04:47.619 "dif_insert_or_strip": false, 00:04:47.619 "zcopy": false, 00:04:47.619 "c2h_success": true, 00:04:47.619 "sock_priority": 0, 00:04:47.619 "abort_timeout_sec": 1, 00:04:47.619 "ack_timeout": 0, 00:04:47.619 "data_wr_pool_size": 0 00:04:47.619 } 00:04:47.619 } 00:04:47.619 ] 00:04:47.619 }, 00:04:47.619 { 00:04:47.619 "subsystem": 
"iscsi", 00:04:47.619 "config": [ 00:04:47.619 { 00:04:47.619 "method": "iscsi_set_options", 00:04:47.619 "params": { 00:04:47.619 "node_base": "iqn.2016-06.io.spdk", 00:04:47.619 "max_sessions": 128, 00:04:47.619 "max_connections_per_session": 2, 00:04:47.619 "max_queue_depth": 64, 00:04:47.619 "default_time2wait": 2, 00:04:47.619 "default_time2retain": 20, 00:04:47.619 "first_burst_length": 8192, 00:04:47.619 "immediate_data": true, 00:04:47.619 "allow_duplicated_isid": false, 00:04:47.619 "error_recovery_level": 0, 00:04:47.619 "nop_timeout": 60, 00:04:47.619 "nop_in_interval": 30, 00:04:47.619 "disable_chap": false, 00:04:47.619 "require_chap": false, 00:04:47.619 "mutual_chap": false, 00:04:47.619 "chap_group": 0, 00:04:47.619 "max_large_datain_per_connection": 64, 00:04:47.619 "max_r2t_per_connection": 4, 00:04:47.619 "pdu_pool_size": 36864, 00:04:47.619 "immediate_data_pool_size": 16384, 00:04:47.619 "data_out_pool_size": 2048 00:04:47.619 } 00:04:47.619 } 00:04:47.619 ] 00:04:47.619 } 00:04:47.619 ] 00:04:47.619 } 00:04:47.619 18:55:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:47.619 18:55:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59016 00:04:47.619 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59016 ']' 00:04:47.619 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59016 00:04:47.619 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:47.619 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.619 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59016 00:04:47.619 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.619 killing process with pid 59016 00:04:47.619 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.619 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59016' 00:04:47.619 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59016 00:04:47.619 18:55:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59016 00:04:47.877 18:55:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59049 00:04:47.877 18:55:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.877 18:55:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:53.161 18:55:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59049 00:04:53.161 18:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59049 ']' 00:04:53.161 18:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59049 00:04:53.161 18:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:53.161 18:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.161 18:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59049 00:04:53.161 18:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.161 18:55:20 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.161 killing process with pid 59049 00:04:53.161 18:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59049' 00:04:53.161 18:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59049 00:04:53.161 18:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59049 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:53.418 00:04:53.418 real 0m7.118s 00:04:53.418 user 0m6.853s 00:04:53.418 sys 0m0.654s 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.418 ************************************ 00:04:53.418 END TEST skip_rpc_with_json 00:04:53.418 ************************************ 00:04:53.418 18:55:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.418 18:55:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.418 18:55:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.418 18:55:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.418 18:55:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.418 ************************************ 00:04:53.418 START TEST skip_rpc_with_delay 00:04:53.418 ************************************ 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.418 
[2024-07-15 18:55:20.643946] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:53.418 [2024-07-15 18:55:20.644091] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:53.418 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:53.418 00:04:53.418 real 0m0.079s 00:04:53.418 user 0m0.048s 00:04:53.418 sys 0m0.030s 00:04:53.419 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.419 18:55:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:53.419 ************************************ 00:04:53.419 END TEST skip_rpc_with_delay 00:04:53.419 ************************************ 00:04:53.419 18:55:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.419 18:55:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:53.419 18:55:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:53.419 18:55:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:53.419 18:55:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.419 18:55:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.419 18:55:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.419 ************************************ 00:04:53.419 START TEST exit_on_failed_rpc_init 00:04:53.419 ************************************ 00:04:53.419 18:55:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:53.419 18:55:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59153 00:04:53.419 18:55:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.419 18:55:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59153 00:04:53.419 18:55:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59153 ']' 00:04:53.419 18:55:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.419 18:55:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.419 18:55:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.419 18:55:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.419 18:55:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.676 [2024-07-15 18:55:20.778922] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:04:53.676 [2024-07-15 18:55:20.779053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59153 ] 00:04:53.676 [2024-07-15 18:55:20.921825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.934 [2024-07-15 18:55:21.065319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.934 [2024-07-15 18:55:21.123832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:54.500 18:55:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.759 [2024-07-15 18:55:21.806699] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:04:54.759 [2024-07-15 18:55:21.807192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59171 ] 00:04:54.759 [2024-07-15 18:55:21.943094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.017 [2024-07-15 18:55:22.063029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.017 [2024-07-15 18:55:22.063127] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
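The "/var/tmp/spdk.sock in use" error above is the outcome this negative test is after: a second spdk_tgt cannot listen on an RPC socket the first instance still owns. Outside of this test, two targets are normally run side by side by giving the second one its own RPC socket with -r, which is exactly what the json_config tests later in this log do. A minimal sketch, assuming the same spdk_tgt and rpc.py paths used throughout this run (the spdk2.sock name is illustrative, any unused path works):

    # first target on the default RPC socket /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # second target on a separate RPC socket (core mask kept disjoint, as the test does)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
    # address each instance explicitly when issuing RPCs
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods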
00:04:55.017 [2024-07-15 18:55:22.063142] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:55.017 [2024-07-15 18:55:22.063151] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:55.017 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:55.017 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:55.017 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:55.017 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59153 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59153 ']' 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59153 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59153 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.018 killing process with pid 59153 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59153' 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59153 00:04:55.018 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59153 00:04:55.583 00:04:55.583 real 0m1.875s 00:04:55.583 user 0m2.196s 00:04:55.583 sys 0m0.425s 00:04:55.583 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.583 18:55:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.583 ************************************ 00:04:55.583 END TEST exit_on_failed_rpc_init 00:04:55.583 ************************************ 00:04:55.583 18:55:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:55.583 18:55:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:55.583 00:04:55.583 real 0m14.764s 00:04:55.583 user 0m14.108s 00:04:55.583 sys 0m1.662s 00:04:55.583 18:55:22 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.583 18:55:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.583 ************************************ 00:04:55.583 END TEST skip_rpc 00:04:55.583 ************************************ 00:04:55.584 18:55:22 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.584 18:55:22 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:55.584 18:55:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.584 
18:55:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.584 18:55:22 -- common/autotest_common.sh@10 -- # set +x 00:04:55.584 ************************************ 00:04:55.584 START TEST rpc_client 00:04:55.584 ************************************ 00:04:55.584 18:55:22 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:55.584 * Looking for test storage... 00:04:55.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:55.584 18:55:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:55.584 OK 00:04:55.584 18:55:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:55.584 00:04:55.584 real 0m0.108s 00:04:55.584 user 0m0.049s 00:04:55.584 sys 0m0.063s 00:04:55.584 18:55:22 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.584 18:55:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:55.584 ************************************ 00:04:55.584 END TEST rpc_client 00:04:55.584 ************************************ 00:04:55.584 18:55:22 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.584 18:55:22 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:55.584 18:55:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.584 18:55:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.584 18:55:22 -- common/autotest_common.sh@10 -- # set +x 00:04:55.584 ************************************ 00:04:55.584 START TEST json_config 00:04:55.584 ************************************ 00:04:55.584 18:55:22 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:55.841 18:55:22 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.841 18:55:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.841 18:55:22 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:55.841 18:55:22 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.842 18:55:22 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.842 18:55:22 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.842 18:55:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.842 18:55:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.842 18:55:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.842 18:55:22 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.842 18:55:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.842 18:55:22 json_config -- nvmf/common.sh@47 -- # : 0 00:04:55.842 18:55:22 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:55.842 18:55:22 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:55.842 18:55:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.842 18:55:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.842 18:55:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.842 18:55:22 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:55.842 18:55:22 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:55.842 18:55:22 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.842 INFO: JSON configuration test init 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:55.842 18:55:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.842 18:55:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:55.842 18:55:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.842 18:55:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.842 18:55:22 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:55.842 18:55:22 json_config -- json_config/common.sh@9 -- # local app=target 00:04:55.842 18:55:22 json_config -- json_config/common.sh@10 -- # shift 00:04:55.842 18:55:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.842 18:55:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.842 18:55:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.842 18:55:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.842 18:55:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.842 18:55:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59289 00:04:55.842 Waiting for target to run... 00:04:55.842 18:55:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:55.842 18:55:22 json_config -- json_config/common.sh@25 -- # waitforlisten 59289 /var/tmp/spdk_tgt.sock 00:04:55.842 18:55:22 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:55.842 18:55:22 json_config -- common/autotest_common.sh@829 -- # '[' -z 59289 ']' 00:04:55.842 18:55:22 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.842 18:55:22 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.842 18:55:22 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.842 18:55:22 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.842 18:55:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.842 [2024-07-15 18:55:22.983000] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:04:55.842 [2024-07-15 18:55:22.983090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59289 ] 00:04:56.416 [2024-07-15 18:55:23.415943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.416 [2024-07-15 18:55:23.531679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.050 18:55:23 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.051 18:55:23 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:57.051 00:04:57.051 18:55:23 json_config -- json_config/common.sh@26 -- # echo '' 00:04:57.051 18:55:23 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:57.051 18:55:23 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:57.051 18:55:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.051 18:55:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.051 18:55:23 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:57.051 18:55:23 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:57.051 18:55:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.051 18:55:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.051 18:55:24 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:57.051 18:55:24 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:57.051 18:55:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:57.051 [2024-07-15 18:55:24.304172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:57.309 18:55:24 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:57.309 18:55:24 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:57.309 18:55:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.309 18:55:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.309 18:55:24 
json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:57.309 18:55:24 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:57.309 18:55:24 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:57.309 18:55:24 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:57.309 18:55:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:57.309 18:55:24 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:57.567 18:55:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.567 18:55:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:57.567 18:55:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.567 18:55:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:57.567 18:55:24 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:57.567 18:55:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:57.825 MallocForNvmf0 00:04:57.825 18:55:25 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:57.825 18:55:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:58.083 MallocForNvmf1 00:04:58.083 18:55:25 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:58.083 18:55:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:58.341 [2024-07-15 18:55:25.563833] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.341 18:55:25 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:58.341 18:55:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:58.598 18:55:25 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:58.598 18:55:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:58.857 18:55:26 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:58.857 18:55:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:59.115 18:55:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:59.115 18:55:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:59.374 [2024-07-15 18:55:26.560417] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:59.374 18:55:26 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:59.374 18:55:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.374 18:55:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.374 18:55:26 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:59.374 18:55:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.374 18:55:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.633 18:55:26 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:59.633 18:55:26 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:59.633 18:55:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:59.892 MallocBdevForConfigChangeCheck 00:04:59.892 18:55:26 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:59.892 18:55:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.892 18:55:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.892 18:55:26 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:59.892 18:55:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:00.151 INFO: shutting down applications... 00:05:00.151 18:55:27 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
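Condensed from the trace above, the configuration saved here was built entirely over the RPC socket before save_config was called. A sketch of the same sequence as a standalone script, using the paths and arguments shown in this run:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MB malloc bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB malloc bdev, 1 KiB blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport (-u io_unit_size, -c in_capsule_data_size)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allow any host, -s serial
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json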
00:05:00.151 18:55:27 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:00.151 18:55:27 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:00.151 18:55:27 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:00.151 18:55:27 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:00.719 Calling clear_iscsi_subsystem 00:05:00.719 Calling clear_nvmf_subsystem 00:05:00.719 Calling clear_nbd_subsystem 00:05:00.719 Calling clear_ublk_subsystem 00:05:00.719 Calling clear_vhost_blk_subsystem 00:05:00.719 Calling clear_vhost_scsi_subsystem 00:05:00.719 Calling clear_bdev_subsystem 00:05:00.719 18:55:27 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:00.719 18:55:27 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:00.719 18:55:27 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:00.719 18:55:27 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:00.719 18:55:27 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:00.719 18:55:27 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:00.978 18:55:28 json_config -- json_config/json_config.sh@345 -- # break 00:05:00.978 18:55:28 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:00.978 18:55:28 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:00.978 18:55:28 json_config -- json_config/common.sh@31 -- # local app=target 00:05:00.978 18:55:28 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:00.978 18:55:28 json_config -- json_config/common.sh@35 -- # [[ -n 59289 ]] 00:05:00.978 18:55:28 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59289 00:05:00.978 18:55:28 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:00.978 18:55:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.978 18:55:28 json_config -- json_config/common.sh@41 -- # kill -0 59289 00:05:00.978 18:55:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.641 18:55:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.641 18:55:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.641 18:55:28 json_config -- json_config/common.sh@41 -- # kill -0 59289 00:05:01.641 18:55:28 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:01.641 18:55:28 json_config -- json_config/common.sh@43 -- # break 00:05:01.641 18:55:28 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:01.641 18:55:28 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:01.641 SPDK target shutdown done 00:05:01.641 18:55:28 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:01.641 INFO: relaunching applications... 
00:05:01.641 18:55:28 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:01.641 18:55:28 json_config -- json_config/common.sh@9 -- # local app=target 00:05:01.641 18:55:28 json_config -- json_config/common.sh@10 -- # shift 00:05:01.641 18:55:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.641 18:55:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.641 18:55:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.641 18:55:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.641 18:55:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.641 Waiting for target to run... 00:05:01.641 18:55:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59485 00:05:01.641 18:55:28 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:01.641 18:55:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:01.641 18:55:28 json_config -- json_config/common.sh@25 -- # waitforlisten 59485 /var/tmp/spdk_tgt.sock 00:05:01.641 18:55:28 json_config -- common/autotest_common.sh@829 -- # '[' -z 59485 ']' 00:05:01.641 18:55:28 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.641 18:55:28 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.641 18:55:28 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:01.641 18:55:28 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.641 18:55:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.641 [2024-07-15 18:55:28.706209] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:01.642 [2024-07-15 18:55:28.707071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59485 ] 00:05:02.216 [2024-07-15 18:55:29.217441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.216 [2024-07-15 18:55:29.329620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.216 [2024-07-15 18:55:29.457592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:02.474 [2024-07-15 18:55:29.677577] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:02.474 [2024-07-15 18:55:29.709699] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:02.474 18:55:29 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.474 18:55:29 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:02.474 00:05:02.474 18:55:29 json_config -- json_config/common.sh@26 -- # echo '' 00:05:02.474 18:55:29 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:02.474 INFO: Checking if target configuration is the same... 
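The comparison that follows amounts to sorting both JSON documents and diffing them. The same check can be run by hand against a live target with the helper scripts this test uses (temp file names below are arbitrary):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/running.json
    /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /tmp/running.json > /tmp/running.sorted
    /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
        < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/original.sorted
    diff -u /tmp/original.sorted /tmp/running.sorted && echo 'INFO: JSON config files are the same'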
00:05:02.474 18:55:29 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:02.474 18:55:29 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:02.474 18:55:29 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:02.474 18:55:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.474 + '[' 2 -ne 2 ']' 00:05:02.474 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:02.474 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:02.474 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:02.474 +++ basename /dev/fd/62 00:05:02.474 ++ mktemp /tmp/62.XXX 00:05:02.474 + tmp_file_1=/tmp/62.40e 00:05:02.474 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:02.732 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:02.732 + tmp_file_2=/tmp/spdk_tgt_config.json.eIt 00:05:02.732 + ret=0 00:05:02.732 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:02.992 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:02.992 + diff -u /tmp/62.40e /tmp/spdk_tgt_config.json.eIt 00:05:02.992 INFO: JSON config files are the same 00:05:02.992 + echo 'INFO: JSON config files are the same' 00:05:02.992 + rm /tmp/62.40e /tmp/spdk_tgt_config.json.eIt 00:05:02.992 + exit 0 00:05:02.992 INFO: changing configuration and checking if this can be detected... 00:05:02.992 18:55:30 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:02.992 18:55:30 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:02.992 18:55:30 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:02.992 18:55:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:03.251 18:55:30 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:03.251 18:55:30 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:03.251 18:55:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.251 + '[' 2 -ne 2 ']' 00:05:03.251 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:03.251 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:03.251 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:03.251 +++ basename /dev/fd/62 00:05:03.251 ++ mktemp /tmp/62.XXX 00:05:03.251 + tmp_file_1=/tmp/62.Y0G 00:05:03.510 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:03.510 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:03.510 + tmp_file_2=/tmp/spdk_tgt_config.json.FwA 00:05:03.510 + ret=0 00:05:03.510 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:03.769 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:03.769 + diff -u /tmp/62.Y0G /tmp/spdk_tgt_config.json.FwA 00:05:03.769 + ret=1 00:05:03.769 + echo '=== Start of file: /tmp/62.Y0G ===' 00:05:03.769 + cat /tmp/62.Y0G 00:05:03.769 + echo '=== End of file: /tmp/62.Y0G ===' 00:05:03.769 + echo '' 00:05:03.769 + echo '=== Start of file: /tmp/spdk_tgt_config.json.FwA ===' 00:05:03.769 + cat /tmp/spdk_tgt_config.json.FwA 00:05:03.769 + echo '=== End of file: /tmp/spdk_tgt_config.json.FwA ===' 00:05:03.769 + echo '' 00:05:03.769 + rm /tmp/62.Y0G /tmp/spdk_tgt_config.json.FwA 00:05:03.769 + exit 1 00:05:03.769 INFO: configuration change detected. 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:03.770 18:55:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.770 18:55:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@317 -- # [[ -n 59485 ]] 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:03.770 18:55:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.770 18:55:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:03.770 18:55:30 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:03.770 18:55:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.770 18:55:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.770 18:55:31 json_config -- json_config/json_config.sh@323 -- # killprocess 59485 00:05:03.770 18:55:31 json_config -- common/autotest_common.sh@948 -- # '[' -z 59485 ']' 00:05:03.770 18:55:31 json_config -- common/autotest_common.sh@952 -- # kill -0 59485 00:05:03.770 18:55:31 json_config -- common/autotest_common.sh@953 -- # uname 00:05:03.770 18:55:31 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.770 18:55:31 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59485 00:05:03.770 
killing process with pid 59485 00:05:03.770 18:55:31 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.770 18:55:31 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.770 18:55:31 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59485' 00:05:03.770 18:55:31 json_config -- common/autotest_common.sh@967 -- # kill 59485 00:05:03.770 18:55:31 json_config -- common/autotest_common.sh@972 -- # wait 59485 00:05:04.029 18:55:31 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:04.029 18:55:31 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:04.029 18:55:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.029 18:55:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.289 INFO: Success 00:05:04.289 18:55:31 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:04.289 18:55:31 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:04.289 ************************************ 00:05:04.289 END TEST json_config 00:05:04.289 ************************************ 00:05:04.289 00:05:04.289 real 0m8.523s 00:05:04.289 user 0m12.110s 00:05:04.289 sys 0m1.919s 00:05:04.289 18:55:31 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.289 18:55:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.289 18:55:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.289 18:55:31 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:04.289 18:55:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.289 18:55:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.289 18:55:31 -- common/autotest_common.sh@10 -- # set +x 00:05:04.289 ************************************ 00:05:04.289 START TEST json_config_extra_key 00:05:04.289 ************************************ 00:05:04.289 18:55:31 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:04.289 18:55:31 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.289 18:55:31 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.289 18:55:31 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.289 18:55:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.289 18:55:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.289 18:55:31 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.289 18:55:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:04.289 18:55:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.289 18:55:31 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:04.289 18:55:31 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:04.289 INFO: launching applications... 00:05:04.289 18:55:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:04.289 18:55:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:04.289 18:55:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:04.289 18:55:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.289 18:55:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.289 18:55:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.289 18:55:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.289 18:55:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.289 18:55:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59626 00:05:04.289 18:55:31 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:04.289 18:55:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.289 Waiting for target to run... 00:05:04.289 18:55:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59626 /var/tmp/spdk_tgt.sock 00:05:04.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
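extra_key.json itself is not echoed into this log, but any file passed to spdk_tgt --json follows the same shape as the configuration dumped earlier in this run: a top-level subsystems array whose entries carry a config list of method/params pairs. An illustrative minimal file (not the actual contents of extra_key.json):

    cat > /tmp/minimal_config.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF
    # start a target directly from the file, as this test does with extra_key.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --json /tmp/minimal_config.json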
00:05:04.289 18:55:31 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59626 ']' 00:05:04.289 18:55:31 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.289 18:55:31 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.289 18:55:31 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.289 18:55:31 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.289 18:55:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:04.289 [2024-07-15 18:55:31.569884] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:04.289 [2024-07-15 18:55:31.569997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59626 ] 00:05:04.859 [2024-07-15 18:55:32.008536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.859 [2024-07-15 18:55:32.105710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.859 [2024-07-15 18:55:32.126789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:05.427 00:05:05.427 INFO: shutting down applications... 00:05:05.427 18:55:32 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.428 18:55:32 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:05.428 18:55:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:05.428 18:55:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:05.428 18:55:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:05.428 18:55:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:05.428 18:55:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:05.428 18:55:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59626 ]] 00:05:05.428 18:55:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59626 00:05:05.428 18:55:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:05.428 18:55:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.428 18:55:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59626 00:05:05.428 18:55:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.996 18:55:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.996 18:55:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.996 SPDK target shutdown done 00:05:05.996 Success 00:05:05.996 18:55:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59626 00:05:05.996 18:55:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:05.996 18:55:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:05.996 18:55:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:05.996 18:55:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:05.996 18:55:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:05.996 ************************************ 00:05:05.996 END TEST json_config_extra_key 00:05:05.996 ************************************ 00:05:05.996 00:05:05.996 real 0m1.655s 00:05:05.996 user 0m1.574s 00:05:05.996 sys 0m0.450s 00:05:05.996 18:55:33 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.996 18:55:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:05.996 18:55:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:05.996 18:55:33 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:05.996 18:55:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.996 18:55:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.996 18:55:33 -- common/autotest_common.sh@10 -- # set +x 00:05:05.996 ************************************ 00:05:05.996 START TEST alias_rpc 00:05:05.996 ************************************ 00:05:05.997 18:55:33 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:05.997 * Looking for test storage... 00:05:05.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:05.997 18:55:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:05.997 18:55:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59690 00:05:05.997 18:55:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.997 18:55:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59690 00:05:05.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
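The shutdown sequence in the json_config_extra_key block above (SIGINT to the target, then polling kill -0 every half second for up to 30 tries) is the standard teardown from json_config/common.sh. A minimal sketch of that loop, with the pid hard-coded purely for illustration:

# Gracefully stop the target started by the test above and wait for it to exit.
app_pid=59626                      # pid printed by the harness; illustrative here

kill -SIGINT "$app_pid"

for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5                      # same 30 x 0.5 s budget as the log shows
done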
00:05:05.997 18:55:33 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59690 ']' 00:05:05.997 18:55:33 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.997 18:55:33 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.997 18:55:33 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.997 18:55:33 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.997 18:55:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.997 [2024-07-15 18:55:33.275507] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:05.997 [2024-07-15 18:55:33.277176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59690 ] 00:05:06.255 [2024-07-15 18:55:33.410825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.255 [2024-07-15 18:55:33.530833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.514 [2024-07-15 18:55:33.584863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.092 18:55:34 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.092 18:55:34 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:07.092 18:55:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:07.357 18:55:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59690 00:05:07.357 18:55:34 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59690 ']' 00:05:07.357 18:55:34 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59690 00:05:07.357 18:55:34 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:07.357 18:55:34 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.357 18:55:34 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59690 00:05:07.357 killing process with pid 59690 00:05:07.357 18:55:34 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.357 18:55:34 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.357 18:55:34 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59690' 00:05:07.357 18:55:34 alias_rpc -- common/autotest_common.sh@967 -- # kill 59690 00:05:07.357 18:55:34 alias_rpc -- common/autotest_common.sh@972 -- # wait 59690 00:05:07.925 ************************************ 00:05:07.925 END TEST alias_rpc 00:05:07.925 ************************************ 00:05:07.925 00:05:07.925 real 0m1.986s 00:05:07.925 user 0m2.204s 00:05:07.925 sys 0m0.424s 00:05:07.925 18:55:35 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.925 18:55:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.925 18:55:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:07.925 18:55:35 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:07.925 18:55:35 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:07.925 18:55:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.925 18:55:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.925 18:55:35 -- common/autotest_common.sh@10 -- # set +x 
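The alias_rpc run above boils down to starting spdk_tgt and feeding it a JSON configuration through scripts/rpc.py load_config -i, where -i (--include-aliases) allows the deprecated method aliases this test exercises. A hedged sketch of that call; the payload below is a hypothetical minimal config, not the file used by the test:

# Feed a JSON configuration to a running target over its default RPC socket.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$RPC" load_config -i <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF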
00:05:07.925 ************************************ 00:05:07.925 START TEST spdkcli_tcp 00:05:07.925 ************************************ 00:05:07.925 18:55:35 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:08.184 * Looking for test storage... 00:05:08.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:08.184 18:55:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:08.184 18:55:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:08.184 18:55:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:08.184 18:55:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:08.184 18:55:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:08.184 18:55:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:08.184 18:55:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:08.184 18:55:35 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.184 18:55:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.184 18:55:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59766 00:05:08.184 18:55:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59766 00:05:08.184 18:55:35 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59766 ']' 00:05:08.184 18:55:35 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.184 18:55:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:08.184 18:55:35 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.184 18:55:35 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.184 18:55:35 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.184 18:55:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.184 [2024-07-15 18:55:35.332352] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
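The spdkcli_tcp test starting here checks that rpc.py can reach the target over TCP: just below, it runs socat to forward 127.0.0.1:9998 to the target's UNIX-domain socket and issues rpc_get_methods through that bridge (the long method list that follows is the reply). A sketch of the same bridge; address, port and retry/timeout values mirror the log.

# Expose the target's UNIX-domain RPC socket on a local TCP port, then talk to
# it with rpc.py over TCP. socat handles one connection and then exits.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# -r: connection retries, -t: per-call timeout, -s/-p: TCP address and port.
"$RPC" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2>/dev/null || true   # socat may already have exited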
00:05:08.184 [2024-07-15 18:55:35.332451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59766 ] 00:05:08.184 [2024-07-15 18:55:35.468297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.442 [2024-07-15 18:55:35.626975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.442 [2024-07-15 18:55:35.626996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.442 [2024-07-15 18:55:35.708783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:09.378 18:55:36 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.378 18:55:36 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:09.378 18:55:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59783 00:05:09.378 18:55:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:09.378 18:55:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:09.378 [ 00:05:09.378 "bdev_malloc_delete", 00:05:09.378 "bdev_malloc_create", 00:05:09.378 "bdev_null_resize", 00:05:09.378 "bdev_null_delete", 00:05:09.378 "bdev_null_create", 00:05:09.378 "bdev_nvme_cuse_unregister", 00:05:09.378 "bdev_nvme_cuse_register", 00:05:09.378 "bdev_opal_new_user", 00:05:09.378 "bdev_opal_set_lock_state", 00:05:09.378 "bdev_opal_delete", 00:05:09.378 "bdev_opal_get_info", 00:05:09.378 "bdev_opal_create", 00:05:09.378 "bdev_nvme_opal_revert", 00:05:09.378 "bdev_nvme_opal_init", 00:05:09.378 "bdev_nvme_send_cmd", 00:05:09.378 "bdev_nvme_get_path_iostat", 00:05:09.378 "bdev_nvme_get_mdns_discovery_info", 00:05:09.378 "bdev_nvme_stop_mdns_discovery", 00:05:09.378 "bdev_nvme_start_mdns_discovery", 00:05:09.378 "bdev_nvme_set_multipath_policy", 00:05:09.378 "bdev_nvme_set_preferred_path", 00:05:09.378 "bdev_nvme_get_io_paths", 00:05:09.378 "bdev_nvme_remove_error_injection", 00:05:09.378 "bdev_nvme_add_error_injection", 00:05:09.378 "bdev_nvme_get_discovery_info", 00:05:09.378 "bdev_nvme_stop_discovery", 00:05:09.378 "bdev_nvme_start_discovery", 00:05:09.378 "bdev_nvme_get_controller_health_info", 00:05:09.378 "bdev_nvme_disable_controller", 00:05:09.378 "bdev_nvme_enable_controller", 00:05:09.378 "bdev_nvme_reset_controller", 00:05:09.378 "bdev_nvme_get_transport_statistics", 00:05:09.378 "bdev_nvme_apply_firmware", 00:05:09.378 "bdev_nvme_detach_controller", 00:05:09.378 "bdev_nvme_get_controllers", 00:05:09.378 "bdev_nvme_attach_controller", 00:05:09.378 "bdev_nvme_set_hotplug", 00:05:09.378 "bdev_nvme_set_options", 00:05:09.378 "bdev_passthru_delete", 00:05:09.378 "bdev_passthru_create", 00:05:09.378 "bdev_lvol_set_parent_bdev", 00:05:09.378 "bdev_lvol_set_parent", 00:05:09.378 "bdev_lvol_check_shallow_copy", 00:05:09.378 "bdev_lvol_start_shallow_copy", 00:05:09.378 "bdev_lvol_grow_lvstore", 00:05:09.378 "bdev_lvol_get_lvols", 00:05:09.378 "bdev_lvol_get_lvstores", 00:05:09.378 "bdev_lvol_delete", 00:05:09.378 "bdev_lvol_set_read_only", 00:05:09.378 "bdev_lvol_resize", 00:05:09.378 "bdev_lvol_decouple_parent", 00:05:09.378 "bdev_lvol_inflate", 00:05:09.378 "bdev_lvol_rename", 00:05:09.378 "bdev_lvol_clone_bdev", 00:05:09.378 "bdev_lvol_clone", 00:05:09.378 "bdev_lvol_snapshot", 00:05:09.378 "bdev_lvol_create", 
00:05:09.378 "bdev_lvol_delete_lvstore", 00:05:09.378 "bdev_lvol_rename_lvstore", 00:05:09.378 "bdev_lvol_create_lvstore", 00:05:09.378 "bdev_raid_set_options", 00:05:09.378 "bdev_raid_remove_base_bdev", 00:05:09.378 "bdev_raid_add_base_bdev", 00:05:09.378 "bdev_raid_delete", 00:05:09.378 "bdev_raid_create", 00:05:09.378 "bdev_raid_get_bdevs", 00:05:09.378 "bdev_error_inject_error", 00:05:09.378 "bdev_error_delete", 00:05:09.379 "bdev_error_create", 00:05:09.379 "bdev_split_delete", 00:05:09.379 "bdev_split_create", 00:05:09.379 "bdev_delay_delete", 00:05:09.379 "bdev_delay_create", 00:05:09.379 "bdev_delay_update_latency", 00:05:09.379 "bdev_zone_block_delete", 00:05:09.379 "bdev_zone_block_create", 00:05:09.379 "blobfs_create", 00:05:09.379 "blobfs_detect", 00:05:09.379 "blobfs_set_cache_size", 00:05:09.379 "bdev_aio_delete", 00:05:09.379 "bdev_aio_rescan", 00:05:09.379 "bdev_aio_create", 00:05:09.379 "bdev_ftl_set_property", 00:05:09.379 "bdev_ftl_get_properties", 00:05:09.379 "bdev_ftl_get_stats", 00:05:09.379 "bdev_ftl_unmap", 00:05:09.379 "bdev_ftl_unload", 00:05:09.379 "bdev_ftl_delete", 00:05:09.379 "bdev_ftl_load", 00:05:09.379 "bdev_ftl_create", 00:05:09.379 "bdev_virtio_attach_controller", 00:05:09.379 "bdev_virtio_scsi_get_devices", 00:05:09.379 "bdev_virtio_detach_controller", 00:05:09.379 "bdev_virtio_blk_set_hotplug", 00:05:09.379 "bdev_iscsi_delete", 00:05:09.379 "bdev_iscsi_create", 00:05:09.379 "bdev_iscsi_set_options", 00:05:09.379 "bdev_uring_delete", 00:05:09.379 "bdev_uring_rescan", 00:05:09.379 "bdev_uring_create", 00:05:09.379 "accel_error_inject_error", 00:05:09.379 "ioat_scan_accel_module", 00:05:09.379 "dsa_scan_accel_module", 00:05:09.379 "iaa_scan_accel_module", 00:05:09.379 "keyring_file_remove_key", 00:05:09.379 "keyring_file_add_key", 00:05:09.379 "keyring_linux_set_options", 00:05:09.379 "iscsi_get_histogram", 00:05:09.379 "iscsi_enable_histogram", 00:05:09.379 "iscsi_set_options", 00:05:09.379 "iscsi_get_auth_groups", 00:05:09.379 "iscsi_auth_group_remove_secret", 00:05:09.379 "iscsi_auth_group_add_secret", 00:05:09.379 "iscsi_delete_auth_group", 00:05:09.379 "iscsi_create_auth_group", 00:05:09.379 "iscsi_set_discovery_auth", 00:05:09.379 "iscsi_get_options", 00:05:09.379 "iscsi_target_node_request_logout", 00:05:09.379 "iscsi_target_node_set_redirect", 00:05:09.379 "iscsi_target_node_set_auth", 00:05:09.379 "iscsi_target_node_add_lun", 00:05:09.379 "iscsi_get_stats", 00:05:09.379 "iscsi_get_connections", 00:05:09.379 "iscsi_portal_group_set_auth", 00:05:09.379 "iscsi_start_portal_group", 00:05:09.379 "iscsi_delete_portal_group", 00:05:09.379 "iscsi_create_portal_group", 00:05:09.379 "iscsi_get_portal_groups", 00:05:09.379 "iscsi_delete_target_node", 00:05:09.379 "iscsi_target_node_remove_pg_ig_maps", 00:05:09.379 "iscsi_target_node_add_pg_ig_maps", 00:05:09.379 "iscsi_create_target_node", 00:05:09.379 "iscsi_get_target_nodes", 00:05:09.379 "iscsi_delete_initiator_group", 00:05:09.379 "iscsi_initiator_group_remove_initiators", 00:05:09.379 "iscsi_initiator_group_add_initiators", 00:05:09.379 "iscsi_create_initiator_group", 00:05:09.379 "iscsi_get_initiator_groups", 00:05:09.379 "nvmf_set_crdt", 00:05:09.379 "nvmf_set_config", 00:05:09.379 "nvmf_set_max_subsystems", 00:05:09.379 "nvmf_stop_mdns_prr", 00:05:09.379 "nvmf_publish_mdns_prr", 00:05:09.379 "nvmf_subsystem_get_listeners", 00:05:09.379 "nvmf_subsystem_get_qpairs", 00:05:09.379 "nvmf_subsystem_get_controllers", 00:05:09.379 "nvmf_get_stats", 00:05:09.379 "nvmf_get_transports", 00:05:09.379 
"nvmf_create_transport", 00:05:09.379 "nvmf_get_targets", 00:05:09.379 "nvmf_delete_target", 00:05:09.379 "nvmf_create_target", 00:05:09.379 "nvmf_subsystem_allow_any_host", 00:05:09.379 "nvmf_subsystem_remove_host", 00:05:09.379 "nvmf_subsystem_add_host", 00:05:09.379 "nvmf_ns_remove_host", 00:05:09.379 "nvmf_ns_add_host", 00:05:09.379 "nvmf_subsystem_remove_ns", 00:05:09.379 "nvmf_subsystem_add_ns", 00:05:09.379 "nvmf_subsystem_listener_set_ana_state", 00:05:09.379 "nvmf_discovery_get_referrals", 00:05:09.379 "nvmf_discovery_remove_referral", 00:05:09.379 "nvmf_discovery_add_referral", 00:05:09.379 "nvmf_subsystem_remove_listener", 00:05:09.379 "nvmf_subsystem_add_listener", 00:05:09.379 "nvmf_delete_subsystem", 00:05:09.379 "nvmf_create_subsystem", 00:05:09.379 "nvmf_get_subsystems", 00:05:09.379 "env_dpdk_get_mem_stats", 00:05:09.379 "nbd_get_disks", 00:05:09.379 "nbd_stop_disk", 00:05:09.379 "nbd_start_disk", 00:05:09.379 "ublk_recover_disk", 00:05:09.379 "ublk_get_disks", 00:05:09.379 "ublk_stop_disk", 00:05:09.379 "ublk_start_disk", 00:05:09.379 "ublk_destroy_target", 00:05:09.379 "ublk_create_target", 00:05:09.379 "virtio_blk_create_transport", 00:05:09.379 "virtio_blk_get_transports", 00:05:09.379 "vhost_controller_set_coalescing", 00:05:09.379 "vhost_get_controllers", 00:05:09.379 "vhost_delete_controller", 00:05:09.379 "vhost_create_blk_controller", 00:05:09.379 "vhost_scsi_controller_remove_target", 00:05:09.379 "vhost_scsi_controller_add_target", 00:05:09.379 "vhost_start_scsi_controller", 00:05:09.379 "vhost_create_scsi_controller", 00:05:09.379 "thread_set_cpumask", 00:05:09.379 "framework_get_governor", 00:05:09.379 "framework_get_scheduler", 00:05:09.379 "framework_set_scheduler", 00:05:09.379 "framework_get_reactors", 00:05:09.379 "thread_get_io_channels", 00:05:09.379 "thread_get_pollers", 00:05:09.379 "thread_get_stats", 00:05:09.379 "framework_monitor_context_switch", 00:05:09.379 "spdk_kill_instance", 00:05:09.379 "log_enable_timestamps", 00:05:09.379 "log_get_flags", 00:05:09.379 "log_clear_flag", 00:05:09.379 "log_set_flag", 00:05:09.379 "log_get_level", 00:05:09.379 "log_set_level", 00:05:09.379 "log_get_print_level", 00:05:09.379 "log_set_print_level", 00:05:09.379 "framework_enable_cpumask_locks", 00:05:09.379 "framework_disable_cpumask_locks", 00:05:09.379 "framework_wait_init", 00:05:09.379 "framework_start_init", 00:05:09.379 "scsi_get_devices", 00:05:09.379 "bdev_get_histogram", 00:05:09.379 "bdev_enable_histogram", 00:05:09.379 "bdev_set_qos_limit", 00:05:09.379 "bdev_set_qd_sampling_period", 00:05:09.379 "bdev_get_bdevs", 00:05:09.379 "bdev_reset_iostat", 00:05:09.379 "bdev_get_iostat", 00:05:09.379 "bdev_examine", 00:05:09.379 "bdev_wait_for_examine", 00:05:09.379 "bdev_set_options", 00:05:09.379 "notify_get_notifications", 00:05:09.379 "notify_get_types", 00:05:09.379 "accel_get_stats", 00:05:09.379 "accel_set_options", 00:05:09.379 "accel_set_driver", 00:05:09.379 "accel_crypto_key_destroy", 00:05:09.379 "accel_crypto_keys_get", 00:05:09.379 "accel_crypto_key_create", 00:05:09.379 "accel_assign_opc", 00:05:09.379 "accel_get_module_info", 00:05:09.379 "accel_get_opc_assignments", 00:05:09.379 "vmd_rescan", 00:05:09.379 "vmd_remove_device", 00:05:09.379 "vmd_enable", 00:05:09.379 "sock_get_default_impl", 00:05:09.379 "sock_set_default_impl", 00:05:09.379 "sock_impl_set_options", 00:05:09.379 "sock_impl_get_options", 00:05:09.379 "iobuf_get_stats", 00:05:09.379 "iobuf_set_options", 00:05:09.379 "framework_get_pci_devices", 00:05:09.379 
"framework_get_config", 00:05:09.379 "framework_get_subsystems", 00:05:09.379 "trace_get_info", 00:05:09.379 "trace_get_tpoint_group_mask", 00:05:09.379 "trace_disable_tpoint_group", 00:05:09.379 "trace_enable_tpoint_group", 00:05:09.379 "trace_clear_tpoint_mask", 00:05:09.379 "trace_set_tpoint_mask", 00:05:09.379 "keyring_get_keys", 00:05:09.379 "spdk_get_version", 00:05:09.379 "rpc_get_methods" 00:05:09.379 ] 00:05:09.379 18:55:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:09.379 18:55:36 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.379 18:55:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.638 18:55:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:09.638 18:55:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59766 00:05:09.638 18:55:36 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59766 ']' 00:05:09.638 18:55:36 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59766 00:05:09.638 18:55:36 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:09.638 18:55:36 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.638 18:55:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59766 00:05:09.638 killing process with pid 59766 00:05:09.638 18:55:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.638 18:55:36 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.638 18:55:36 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59766' 00:05:09.638 18:55:36 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59766 00:05:09.638 18:55:36 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59766 00:05:10.204 ************************************ 00:05:10.204 END TEST spdkcli_tcp 00:05:10.204 ************************************ 00:05:10.204 00:05:10.204 real 0m2.107s 00:05:10.204 user 0m3.821s 00:05:10.204 sys 0m0.604s 00:05:10.204 18:55:37 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.204 18:55:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.204 18:55:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:10.204 18:55:37 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.204 18:55:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.204 18:55:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.204 18:55:37 -- common/autotest_common.sh@10 -- # set +x 00:05:10.204 ************************************ 00:05:10.204 START TEST dpdk_mem_utility 00:05:10.204 ************************************ 00:05:10.204 18:55:37 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.204 * Looking for test storage... 00:05:10.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:10.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:10.204 18:55:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:10.204 18:55:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59857 00:05:10.204 18:55:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.204 18:55:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59857 00:05:10.204 18:55:37 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59857 ']' 00:05:10.204 18:55:37 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.204 18:55:37 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.204 18:55:37 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.204 18:55:37 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.204 18:55:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:10.204 [2024-07-15 18:55:37.455766] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:10.204 [2024-07-15 18:55:37.455883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59857 ] 00:05:10.462 [2024-07-15 18:55:37.590521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.462 [2024-07-15 18:55:37.727370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.720 [2024-07-15 18:55:37.781671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:11.286 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.286 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:11.286 18:55:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:11.286 18:55:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:11.286 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.286 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.286 { 00:05:11.286 "filename": "/tmp/spdk_mem_dump.txt" 00:05:11.286 } 00:05:11.286 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.286 18:55:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:11.286 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:11.286 1 heaps totaling size 814.000000 MiB 00:05:11.286 size: 814.000000 MiB heap id: 0 00:05:11.286 end heaps---------- 00:05:11.286 8 mempools totaling size 598.116089 MiB 00:05:11.286 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:11.286 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:11.286 size: 84.521057 MiB name: bdev_io_59857 00:05:11.286 size: 51.011292 MiB name: evtpool_59857 00:05:11.286 size: 50.003479 MiB name: msgpool_59857 00:05:11.286 size: 21.763794 MiB name: PDU_Pool 00:05:11.286 size: 19.513306 
MiB name: SCSI_TASK_Pool 00:05:11.286 size: 0.026123 MiB name: Session_Pool 00:05:11.286 end mempools------- 00:05:11.286 6 memzones totaling size 4.142822 MiB 00:05:11.286 size: 1.000366 MiB name: RG_ring_0_59857 00:05:11.286 size: 1.000366 MiB name: RG_ring_1_59857 00:05:11.286 size: 1.000366 MiB name: RG_ring_4_59857 00:05:11.286 size: 1.000366 MiB name: RG_ring_5_59857 00:05:11.286 size: 0.125366 MiB name: RG_ring_2_59857 00:05:11.286 size: 0.015991 MiB name: RG_ring_3_59857 00:05:11.286 end memzones------- 00:05:11.286 18:55:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:11.545 heap id: 0 total size: 814.000000 MiB number of busy elements: 291 number of free elements: 15 00:05:11.545 list of free elements. size: 12.473572 MiB 00:05:11.545 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:11.545 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:11.545 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:11.545 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:11.545 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:11.545 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:11.545 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:11.545 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:11.545 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:11.545 element at address: 0x20001aa00000 with size: 0.570618 MiB 00:05:11.545 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:11.545 element at address: 0x200000800000 with size: 0.486328 MiB 00:05:11.545 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:11.545 element at address: 0x200027e00000 with size: 0.395935 MiB 00:05:11.545 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:11.545 list of standard malloc elements. 
size: 199.263855 MiB 00:05:11.545 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:11.545 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:11.545 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:11.545 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:11.545 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:11.545 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:11.545 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:11.545 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:11.545 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:11.545 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:11.545 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:05:11.546 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:11.546 element at 
address: 0x200003a5a380 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92680 
with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94b40 with size: 0.000183 MiB 
00:05:11.546 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:11.546 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e65680 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:11.547 element at 
address: 0x200027e6de00 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:11.547 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:11.547 list of memzone associated elements. 
size: 602.262573 MiB 00:05:11.547 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:11.547 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:11.547 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:11.547 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:11.547 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:11.547 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59857_0 00:05:11.547 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:11.547 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59857_0 00:05:11.547 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:11.547 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59857_0 00:05:11.547 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:11.547 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:11.547 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:11.547 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:11.547 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:11.547 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59857 00:05:11.547 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:11.547 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59857 00:05:11.547 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:11.547 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59857 00:05:11.547 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:11.547 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:11.547 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:11.547 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:11.547 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:11.547 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:11.547 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:11.547 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:11.547 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:11.547 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59857 00:05:11.547 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:11.547 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59857 00:05:11.547 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:11.547 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59857 00:05:11.547 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:11.547 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59857 00:05:11.547 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:11.547 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59857 00:05:11.547 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:11.547 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:11.547 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:11.547 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:11.547 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:11.547 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:11.547 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:11.547 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_59857 00:05:11.547 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:11.547 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:11.547 element at address: 0x200027e65740 with size: 0.023743 MiB 00:05:11.547 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:11.547 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:11.547 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59857 00:05:11.547 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:05:11.547 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:11.547 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:11.547 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59857 00:05:11.547 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:11.547 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59857 00:05:11.547 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:05:11.547 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:11.547 18:55:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:11.547 18:55:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59857 00:05:11.547 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59857 ']' 00:05:11.547 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59857 00:05:11.547 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:11.547 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.547 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59857 00:05:11.547 killing process with pid 59857 00:05:11.547 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.547 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.547 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59857' 00:05:11.547 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59857 00:05:11.548 18:55:38 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59857 00:05:12.113 ************************************ 00:05:12.113 END TEST dpdk_mem_utility 00:05:12.113 ************************************ 00:05:12.113 00:05:12.113 real 0m1.913s 00:05:12.113 user 0m2.133s 00:05:12.113 sys 0m0.404s 00:05:12.113 18:55:39 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.113 18:55:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.113 18:55:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.113 18:55:39 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:12.113 18:55:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.113 18:55:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.113 18:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:12.113 ************************************ 00:05:12.113 START TEST event 00:05:12.113 ************************************ 00:05:12.113 18:55:39 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:12.113 * Looking for test storage... 
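The dpdk_mem_utility section above is a two-step flow: ask the running target to dump its DPDK memory layout via the env_dpdk_get_mem_stats RPC (which writes /tmp/spdk_mem_dump.txt, per the "filename" reply earlier), then post-process that dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element view of heap 0. A sketch of the same flow, assuming a target is already running on the default socket:

# Dump and inspect the DPDK memory layout of a running SPDK target.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

"$RPC" env_dpdk_get_mem_stats      # writes /tmp/spdk_mem_dump.txt

"$MEM_SCRIPT"                      # heaps, mempools and memzones summary
"$MEM_SCRIPT" -m 0                 # busy/free elements of heap 0, as dumped above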
00:05:12.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:12.113 18:55:39 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:12.113 18:55:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:12.113 18:55:39 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.113 18:55:39 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:12.113 18:55:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.113 18:55:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.113 ************************************ 00:05:12.113 START TEST event_perf 00:05:12.113 ************************************ 00:05:12.113 18:55:39 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.113 Running I/O for 1 seconds...[2024-07-15 18:55:39.356331] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:12.113 [2024-07-15 18:55:39.356574] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59934 ] 00:05:12.369 [2024-07-15 18:55:39.492152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.625 [2024-07-15 18:55:39.662922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.625 Running I/O for 1 seconds...[2024-07-15 18:55:39.663038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.625 [2024-07-15 18:55:39.663116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.625 [2024-07-15 18:55:39.663121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.558 00:05:13.558 lcore 0: 115205 00:05:13.558 lcore 1: 115206 00:05:13.558 lcore 2: 115209 00:05:13.558 lcore 3: 115202 00:05:13.558 done. 00:05:13.558 00:05:13.558 real 0m1.451s 00:05:13.558 user 0m4.240s 00:05:13.558 sys 0m0.083s 00:05:13.558 18:55:40 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.558 18:55:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.558 ************************************ 00:05:13.558 END TEST event_perf 00:05:13.558 ************************************ 00:05:13.558 18:55:40 event -- common/autotest_common.sh@1142 -- # return 0 00:05:13.558 18:55:40 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:13.558 18:55:40 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:13.558 18:55:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.558 18:55:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.817 ************************************ 00:05:13.817 START TEST event_reactor 00:05:13.817 ************************************ 00:05:13.817 18:55:40 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:13.817 [2024-07-15 18:55:40.867138] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
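The START TEST / END TEST banners and the real/user/sys timings that bracket event_perf above come from a run_test-style wrapper. A simplified, hypothetical sketch of that wrapper (the real one in autotest_common.sh also manages xtrace and argument checks):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                  # run the test command; bash prints real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return "$rc"
    }

    run_test event_perf ./event_perf -m 0xF -t 1   # same shape as the invocation above, path shortened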
00:05:13.817 [2024-07-15 18:55:40.867403] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59973 ] 00:05:13.817 [2024-07-15 18:55:41.001576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.076 [2024-07-15 18:55:41.156512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.013 test_start 00:05:15.013 oneshot 00:05:15.013 tick 100 00:05:15.013 tick 100 00:05:15.013 tick 250 00:05:15.013 tick 100 00:05:15.013 tick 100 00:05:15.013 tick 100 00:05:15.013 tick 250 00:05:15.013 tick 500 00:05:15.013 tick 100 00:05:15.013 tick 100 00:05:15.013 tick 250 00:05:15.013 tick 100 00:05:15.013 tick 100 00:05:15.013 test_end 00:05:15.013 ************************************ 00:05:15.013 END TEST event_reactor 00:05:15.013 ************************************ 00:05:15.013 00:05:15.013 real 0m1.419s 00:05:15.013 user 0m1.244s 00:05:15.013 sys 0m0.068s 00:05:15.013 18:55:42 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.013 18:55:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:15.272 18:55:42 event -- common/autotest_common.sh@1142 -- # return 0 00:05:15.272 18:55:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:15.272 18:55:42 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:15.272 18:55:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.272 18:55:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.272 ************************************ 00:05:15.272 START TEST event_reactor_perf 00:05:15.272 ************************************ 00:05:15.272 18:55:42 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:15.272 [2024-07-15 18:55:42.345221] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:15.272 [2024-07-15 18:55:42.345634] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60008 ] 00:05:15.272 [2024-07-15 18:55:42.480374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.530 [2024-07-15 18:55:42.616641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.466 test_start 00:05:16.466 test_end 00:05:16.466 Performance: 371989 events per second 00:05:16.466 00:05:16.466 real 0m1.412s 00:05:16.466 user 0m1.238s 00:05:16.466 sys 0m0.066s 00:05:16.466 18:55:43 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.466 18:55:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.466 ************************************ 00:05:16.466 END TEST event_reactor_perf 00:05:16.466 ************************************ 00:05:16.723 18:55:43 event -- common/autotest_common.sh@1142 -- # return 0 00:05:16.723 18:55:43 event -- event/event.sh@49 -- # uname -s 00:05:16.723 18:55:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:16.724 18:55:43 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:16.724 18:55:43 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.724 18:55:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.724 18:55:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.724 ************************************ 00:05:16.724 START TEST event_scheduler 00:05:16.724 ************************************ 00:05:16.724 18:55:43 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:16.724 * Looking for test storage... 00:05:16.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:16.724 18:55:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:16.724 18:55:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60070 00:05:16.724 18:55:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.724 18:55:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:16.724 18:55:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60070 00:05:16.724 18:55:43 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60070 ']' 00:05:16.724 18:55:43 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.724 18:55:43 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.724 18:55:43 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.724 18:55:43 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.724 18:55:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.724 [2024-07-15 18:55:43.946682] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:16.724 [2024-07-15 18:55:43.946975] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60070 ] 00:05:16.981 [2024-07-15 18:55:44.089862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.981 [2024-07-15 18:55:44.253162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.981 [2024-07-15 18:55:44.253343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.981 [2024-07-15 18:55:44.253521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.981 [2024-07-15 18:55:44.255049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.916 18:55:44 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.916 18:55:44 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:17.916 18:55:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:17.916 18:55:44 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.916 18:55:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:17.916 POWER: Cannot set governor of lcore 0 to userspace 00:05:17.916 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:17.916 POWER: Cannot set governor of lcore 0 to performance 00:05:17.916 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:17.916 POWER: Cannot set governor of lcore 0 to userspace 00:05:17.916 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:17.916 POWER: Cannot set governor of lcore 0 to userspace 00:05:17.916 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:17.916 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:17.916 POWER: Unable to set Power Management Environment for lcore 0 00:05:17.916 [2024-07-15 18:55:44.983408] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:17.916 [2024-07-15 18:55:44.983425] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:17.916 [2024-07-15 18:55:44.983434] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:17.916 [2024-07-15 18:55:44.983448] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:17.916 [2024-07-15 18:55:44.983456] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:17.916 [2024-07-15 18:55:44.983465] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:17.916 18:55:44 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.916 18:55:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:17.916 18:55:44 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.916 18:55:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 [2024-07-15 18:55:45.060972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:17.916 [2024-07-15 18:55:45.106362] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:17.916 18:55:45 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.916 18:55:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:17.916 18:55:45 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.916 18:55:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.916 18:55:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 ************************************ 00:05:17.916 START TEST scheduler_create_thread 00:05:17.916 ************************************ 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 2 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 3 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 4 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 5 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 6 00:05:17.916 
18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 7 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 8 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 9 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.916 10 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.916 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.175 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.175 18:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:18.175 18:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:18.175 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.175 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.175 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.175 18:55:45 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:18.175 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.175 18:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.550 18:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.550 18:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:19.550 18:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:19.550 18:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.550 18:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.486 ************************************ 00:05:20.486 END TEST scheduler_create_thread 00:05:20.486 ************************************ 00:05:20.486 18:55:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.486 00:05:20.486 real 0m2.614s 00:05:20.486 user 0m0.016s 00:05:20.486 sys 0m0.005s 00:05:20.486 18:55:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.486 18:55:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.486 18:55:47 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:20.486 18:55:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:20.486 18:55:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60070 00:05:20.486 18:55:47 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60070 ']' 00:05:20.486 18:55:47 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60070 00:05:20.486 18:55:47 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:20.744 18:55:47 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.744 18:55:47 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60070 00:05:20.745 killing process with pid 60070 00:05:20.745 18:55:47 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:20.745 18:55:47 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:20.745 18:55:47 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60070' 00:05:20.745 18:55:47 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60070 00:05:20.745 18:55:47 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60070 00:05:21.003 [2024-07-15 18:55:48.211737] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
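Condensed, the scheduler_create_thread subtest above drives the running scheduler app entirely through plugin RPCs; reconstructed from the trace (rpc_cmd is the test tree's RPC wrapper), the sequence is roughly:

    # one pinned busy thread and one pinned idle thread per core of the 0xF mask
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $mask -a 100
    done
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m $mask -a 0
    done
    # unpinned threads with partial activity
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # create and immediately delete a thread to cover the delete path
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"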
00:05:21.570 00:05:21.570 real 0m4.750s 00:05:21.570 user 0m8.809s 00:05:21.570 sys 0m0.424s 00:05:21.570 ************************************ 00:05:21.570 END TEST event_scheduler 00:05:21.570 ************************************ 00:05:21.570 18:55:48 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.570 18:55:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.570 18:55:48 event -- common/autotest_common.sh@1142 -- # return 0 00:05:21.570 18:55:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:21.570 18:55:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:21.570 18:55:48 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.570 18:55:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.570 18:55:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.570 ************************************ 00:05:21.570 START TEST app_repeat 00:05:21.570 ************************************ 00:05:21.570 18:55:48 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:21.570 Process app_repeat pid: 60169 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60169 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60169' 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.570 spdk_app_start Round 0 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:21.570 18:55:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60169 /var/tmp/spdk-nbd.sock 00:05:21.570 18:55:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60169 ']' 00:05:21.570 18:55:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.570 18:55:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.570 18:55:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.570 18:55:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.570 18:55:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.570 [2024-07-15 18:55:48.649583] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:21.570 [2024-07-15 18:55:48.649992] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60169 ] 00:05:21.570 [2024-07-15 18:55:48.786684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.829 [2024-07-15 18:55:48.943424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.829 [2024-07-15 18:55:48.943437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.829 [2024-07-15 18:55:49.005597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:22.762 18:55:49 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.762 18:55:49 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:22.762 18:55:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.762 Malloc0 00:05:22.762 18:55:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.020 Malloc1 00:05:23.280 18:55:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.280 18:55:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.538 /dev/nbd0 00:05:23.538 18:55:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.538 18:55:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:23.538 18:55:50 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.538 1+0 records in 00:05:23.538 1+0 records out 00:05:23.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320782 s, 12.8 MB/s 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:23.538 18:55:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:23.538 18:55:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.538 18:55:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.538 18:55:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.797 /dev/nbd1 00:05:23.797 18:55:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.797 18:55:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.797 1+0 records in 00:05:23.797 1+0 records out 00:05:23.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445218 s, 9.2 MB/s 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:23.797 18:55:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:23.797 18:55:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.797 18:55:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.797 18:55:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:23.797 18:55:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.797 18:55:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.056 { 00:05:24.056 "nbd_device": "/dev/nbd0", 00:05:24.056 "bdev_name": "Malloc0" 00:05:24.056 }, 00:05:24.056 { 00:05:24.056 "nbd_device": "/dev/nbd1", 00:05:24.056 "bdev_name": "Malloc1" 00:05:24.056 } 00:05:24.056 ]' 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.056 { 00:05:24.056 "nbd_device": "/dev/nbd0", 00:05:24.056 "bdev_name": "Malloc0" 00:05:24.056 }, 00:05:24.056 { 00:05:24.056 "nbd_device": "/dev/nbd1", 00:05:24.056 "bdev_name": "Malloc1" 00:05:24.056 } 00:05:24.056 ]' 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.056 /dev/nbd1' 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.056 /dev/nbd1' 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.056 256+0 records in 00:05:24.056 256+0 records out 00:05:24.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00875669 s, 120 MB/s 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.056 18:55:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.314 256+0 records in 00:05:24.314 256+0 records out 00:05:24.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226406 s, 46.3 MB/s 00:05:24.314 18:55:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.314 18:55:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.314 256+0 records in 00:05:24.314 256+0 records out 00:05:24.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318867 s, 32.9 MB/s 00:05:24.314 18:55:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.314 18:55:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.314 18:55:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.314 18:55:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.314 18:55:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.314 18:55:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.314 18:55:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.314 18:55:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.314 18:55:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.314 18:55:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.315 18:55:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.315 18:55:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.315 18:55:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.315 18:55:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.315 18:55:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.315 18:55:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.315 18:55:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.315 18:55:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.315 18:55:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.573 18:55:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.573 18:55:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.573 18:55:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.573 18:55:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.573 18:55:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.573 18:55:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.573 18:55:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.573 18:55:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.573 18:55:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.573 18:55:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.831 18:55:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.831 18:55:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.831 18:55:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.831 18:55:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.831 18:55:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.831 18:55:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.831 18:55:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.831 18:55:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.831 18:55:51 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.831 18:55:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.832 18:55:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.090 18:55:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.090 18:55:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.091 18:55:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.091 18:55:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.091 18:55:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.091 18:55:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.091 18:55:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.091 18:55:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.091 18:55:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.091 18:55:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.091 18:55:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.091 18:55:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.091 18:55:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.669 18:55:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.669 [2024-07-15 18:55:52.860554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.928 [2024-07-15 18:55:52.963974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.928 [2024-07-15 18:55:52.963986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.928 [2024-07-15 18:55:53.019910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:25.928 [2024-07-15 18:55:53.019993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.928 [2024-07-15 18:55:53.020008] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.458 spdk_app_start Round 1 00:05:28.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.458 18:55:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.458 18:55:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:28.458 18:55:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60169 /var/tmp/spdk-nbd.sock 00:05:28.458 18:55:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60169 ']' 00:05:28.458 18:55:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.458 18:55:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.458 18:55:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
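The Round 0 cycle that just completed is nbd_common.sh's nbd_rpc_data_verify flow: export the two malloc bdevs as NBD devices, write the same random 1 MiB to each, read it back and compare, then tear the devices down. Stripped of the waitfornbd retry loops, a sketch of that flow reconstructed from the trace (rpc.py path shortened):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    $rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd    # expect 2
    # write identical random data to both devices, then verify it reads back
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$dev bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest $dev
    done
    rm nbdrandtest
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1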
00:05:28.458 18:55:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.458 18:55:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.717 18:55:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.717 18:55:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:28.717 18:55:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.975 Malloc0 00:05:28.975 18:55:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.542 Malloc1 00:05:29.542 18:55:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.542 /dev/nbd0 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.542 18:55:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.542 18:55:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:29.542 18:55:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:29.542 18:55:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:29.542 18:55:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:29.542 18:55:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:29.801 18:55:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:29.801 18:55:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:29.801 18:55:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:29.801 18:55:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.801 1+0 records in 00:05:29.801 1+0 records out 
00:05:29.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778672 s, 5.3 MB/s 00:05:29.801 18:55:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.801 18:55:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:29.801 18:55:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.801 18:55:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:29.801 18:55:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:29.801 18:55:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.801 18:55:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.801 18:55:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.059 /dev/nbd1 00:05:30.059 18:55:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.059 18:55:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.059 18:55:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:30.059 18:55:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.059 18:55:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.059 18:55:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.059 18:55:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:30.059 18:55:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.059 18:55:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.059 18:55:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.059 18:55:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.060 1+0 records in 00:05:30.060 1+0 records out 00:05:30.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392548 s, 10.4 MB/s 00:05:30.060 18:55:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.060 18:55:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.060 18:55:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.060 18:55:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.060 18:55:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.060 18:55:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.060 18:55:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.060 18:55:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.060 18:55:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.060 18:55:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.319 { 00:05:30.319 "nbd_device": "/dev/nbd0", 00:05:30.319 "bdev_name": "Malloc0" 00:05:30.319 }, 00:05:30.319 { 00:05:30.319 "nbd_device": "/dev/nbd1", 00:05:30.319 "bdev_name": "Malloc1" 00:05:30.319 } 
00:05:30.319 ]' 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.319 { 00:05:30.319 "nbd_device": "/dev/nbd0", 00:05:30.319 "bdev_name": "Malloc0" 00:05:30.319 }, 00:05:30.319 { 00:05:30.319 "nbd_device": "/dev/nbd1", 00:05:30.319 "bdev_name": "Malloc1" 00:05:30.319 } 00:05:30.319 ]' 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.319 /dev/nbd1' 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.319 /dev/nbd1' 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.319 256+0 records in 00:05:30.319 256+0 records out 00:05:30.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101894 s, 103 MB/s 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.319 256+0 records in 00:05:30.319 256+0 records out 00:05:30.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250111 s, 41.9 MB/s 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.319 256+0 records in 00:05:30.319 256+0 records out 00:05:30.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298994 s, 35.1 MB/s 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.319 18:55:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.916 18:55:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.916 18:55:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.916 18:55:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.916 18:55:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.916 18:55:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.916 18:55:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.916 18:55:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.916 18:55:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.916 18:55:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.916 18:55:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.916 18:55:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.917 18:55:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.917 18:55:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.917 18:55:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.917 18:55:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.917 18:55:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.917 18:55:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.917 18:55:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.917 18:55:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.917 18:55:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.917 18:55:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.175 18:55:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.176 18:55:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.176 18:55:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:31.176 18:55:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.176 18:55:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.176 18:55:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.176 18:55:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.176 18:55:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.176 18:55:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.176 18:55:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.176 18:55:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.176 18:55:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.176 18:55:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.434 18:55:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.693 [2024-07-15 18:55:58.916867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.951 [2024-07-15 18:55:59.034189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.951 [2024-07-15 18:55:59.034192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.951 [2024-07-15 18:55:59.089682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:31.951 [2024-07-15 18:55:59.089782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.951 [2024-07-15 18:55:59.089797] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.482 spdk_app_start Round 2 00:05:34.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.482 18:56:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.482 18:56:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:34.482 18:56:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60169 /var/tmp/spdk-nbd.sock 00:05:34.482 18:56:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60169 ']' 00:05:34.482 18:56:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.482 18:56:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.482 18:56:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
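The nbd_dd_data_verify pass traced in the round above stages 256 blocks of 4096 bytes from /dev/urandom in a temp file, copies that file onto every exported nbd device with O_DIRECT, and later compares the first 1M of each device back against the same file. A minimal standalone sketch of that write/verify pattern — assuming /dev/nbd0 and /dev/nbd1 are already connected, and with a made-up temp-file path — would be:

# Sketch only: same dd/cmp pattern as nbd_dd_data_verify; paths are assumptions.
nbd_list=('/dev/nbd0' '/dev/nbd1')
tmp_file=/tmp/nbdrandtest            # the real test keeps this under test/event/

# write pass: stage 1 MiB of random data, then copy it onto every device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify pass: the first 1M of each device must match the staged data
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"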
00:05:34.482 18:56:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.482 18:56:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.740 18:56:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.741 18:56:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:34.741 18:56:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.000 Malloc0 00:05:35.000 18:56:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.306 Malloc1 00:05:35.306 18:56:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.306 18:56:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.564 /dev/nbd0 00:05:35.564 18:56:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.564 18:56:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.564 1+0 records in 00:05:35.564 1+0 records out 
00:05:35.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197068 s, 20.8 MB/s 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:35.564 18:56:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:35.564 18:56:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.564 18:56:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.564 18:56:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.822 /dev/nbd1 00:05:35.822 18:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.822 18:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.822 1+0 records in 00:05:35.822 1+0 records out 00:05:35.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364849 s, 11.2 MB/s 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:35.822 18:56:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:35.822 18:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.822 18:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.822 18:56:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.822 18:56:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.822 18:56:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.081 18:56:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.081 { 00:05:36.082 "nbd_device": "/dev/nbd0", 00:05:36.082 "bdev_name": "Malloc0" 00:05:36.082 }, 00:05:36.082 { 00:05:36.082 "nbd_device": "/dev/nbd1", 00:05:36.082 "bdev_name": "Malloc1" 00:05:36.082 } 
00:05:36.082 ]' 00:05:36.082 18:56:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.082 { 00:05:36.082 "nbd_device": "/dev/nbd0", 00:05:36.082 "bdev_name": "Malloc0" 00:05:36.082 }, 00:05:36.082 { 00:05:36.082 "nbd_device": "/dev/nbd1", 00:05:36.082 "bdev_name": "Malloc1" 00:05:36.082 } 00:05:36.082 ]' 00:05:36.082 18:56:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.340 /dev/nbd1' 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.340 /dev/nbd1' 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.340 256+0 records in 00:05:36.340 256+0 records out 00:05:36.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720046 s, 146 MB/s 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.340 256+0 records in 00:05:36.340 256+0 records out 00:05:36.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258337 s, 40.6 MB/s 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.340 256+0 records in 00:05:36.340 256+0 records out 00:05:36.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291475 s, 36.0 MB/s 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.340 18:56:03 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.340 18:56:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.599 18:56:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.599 18:56:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.599 18:56:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.599 18:56:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.599 18:56:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.599 18:56:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.599 18:56:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.599 18:56:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.599 18:56:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.599 18:56:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.857 18:56:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.857 18:56:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.857 18:56:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.857 18:56:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.857 18:56:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.857 18:56:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.857 18:56:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.857 18:56:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.857 18:56:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.857 18:56:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.857 18:56:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.115 18:56:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.115 18:56:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.115 18:56:04 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:37.115 18:56:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.115 18:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.115 18:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.115 18:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.115 18:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.115 18:56:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.116 18:56:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.116 18:56:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.116 18:56:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.116 18:56:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.374 18:56:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.632 [2024-07-15 18:56:04.848488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.891 [2024-07-15 18:56:04.960743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.891 [2024-07-15 18:56:04.960755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.892 [2024-07-15 18:56:05.014828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:37.892 [2024-07-15 18:56:05.014912] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.892 [2024-07-15 18:56:05.014927] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.455 18:56:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60169 /var/tmp/spdk-nbd.sock 00:05:40.455 18:56:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60169 ']' 00:05:40.455 18:56:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.455 18:56:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.455 18:56:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
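Teardown in each round is the mirror image of setup: every device is detached over the RPC socket with nbd_stop_disk, waitfornbd_exit polls /proc/partitions until the kernel drops the entry, and nbd_get_disks must then report an empty list. A rough equivalent of that sequence (the sleep interval and the shortened rpc.py path are assumptions, not taken from this trace):

# Sketch only: detach nbd devices and wait for the kernel to forget them.
rpc_server=/var/tmp/spdk-nbd.sock
for dev in /dev/nbd0 /dev/nbd1; do
    scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break   # gone from the kernel
        sleep 0.1
    done
done
scripts/rpc.py -s "$rpc_server" nbd_get_disks          # expected to print []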
00:05:40.455 18:56:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.455 18:56:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.713 18:56:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.713 18:56:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:40.713 18:56:07 event.app_repeat -- event/event.sh@39 -- # killprocess 60169 00:05:40.713 18:56:07 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60169 ']' 00:05:40.713 18:56:07 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60169 00:05:40.713 18:56:07 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:40.713 18:56:07 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.713 18:56:07 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60169 00:05:40.713 killing process with pid 60169 00:05:40.713 18:56:07 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.713 18:56:07 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.713 18:56:07 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60169' 00:05:40.713 18:56:07 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60169 00:05:40.713 18:56:07 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60169 00:05:40.973 spdk_app_start is called in Round 0. 00:05:40.973 Shutdown signal received, stop current app iteration 00:05:40.973 Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 reinitialization... 00:05:40.973 spdk_app_start is called in Round 1. 00:05:40.973 Shutdown signal received, stop current app iteration 00:05:40.973 Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 reinitialization... 00:05:40.973 spdk_app_start is called in Round 2. 00:05:40.973 Shutdown signal received, stop current app iteration 00:05:40.973 Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 reinitialization... 00:05:40.973 spdk_app_start is called in Round 3. 
00:05:40.973 Shutdown signal received, stop current app iteration 00:05:40.973 18:56:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:40.973 18:56:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:40.973 00:05:40.973 real 0m19.628s 00:05:40.973 user 0m44.099s 00:05:40.973 sys 0m2.993s 00:05:40.973 18:56:08 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.973 18:56:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.973 ************************************ 00:05:40.973 END TEST app_repeat 00:05:40.973 ************************************ 00:05:41.232 18:56:08 event -- common/autotest_common.sh@1142 -- # return 0 00:05:41.232 18:56:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:41.232 18:56:08 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:41.232 18:56:08 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.232 18:56:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.232 18:56:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.232 ************************************ 00:05:41.232 START TEST cpu_locks 00:05:41.232 ************************************ 00:05:41.232 18:56:08 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:41.232 * Looking for test storage... 00:05:41.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:41.232 18:56:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:41.232 18:56:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:41.232 18:56:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:41.232 18:56:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:41.232 18:56:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.232 18:56:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.232 18:56:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.232 ************************************ 00:05:41.232 START TEST default_locks 00:05:41.232 ************************************ 00:05:41.232 18:56:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:41.232 18:56:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60613 00:05:41.232 18:56:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60613 00:05:41.232 18:56:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.232 18:56:08 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60613 ']' 00:05:41.232 18:56:08 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.232 18:56:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.232 18:56:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
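waitforlisten, which the default_locks setup is entering here, blocks until the freshly started target both stays alive and answers on its UNIX-domain RPC socket. A condensed sketch of that helper — retry count, poll interval and the rpc_get_methods probe are assumptions about the implementation, not lifted from this log:

# Sketch only: wait for a target to come up on its RPC socket.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1                 # target died early
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0                                            # socket is answering
        fi
        sleep 0.5
    done
    return 1                                                    # never came up
}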
00:05:41.232 18:56:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.232 18:56:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.232 [2024-07-15 18:56:08.459462] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:41.232 [2024-07-15 18:56:08.459575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60613 ] 00:05:41.492 [2024-07-15 18:56:08.590674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.492 [2024-07-15 18:56:08.706242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.492 [2024-07-15 18:56:08.761306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:42.427 18:56:09 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.427 18:56:09 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:42.427 18:56:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60613 00:05:42.427 18:56:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60613 00:05:42.427 18:56:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.684 18:56:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60613 00:05:42.684 18:56:09 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60613 ']' 00:05:42.684 18:56:09 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60613 00:05:42.684 18:56:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:42.684 18:56:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.684 18:56:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60613 00:05:42.684 killing process with pid 60613 00:05:42.684 18:56:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.684 18:56:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.684 18:56:09 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60613' 00:05:42.684 18:56:09 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60613 00:05:42.684 18:56:09 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60613 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60613 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60613 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:43.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:43.251 ERROR: process (pid: 60613) is no longer running 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60613 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60613 ']' 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.251 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60613) - No such process 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:43.251 00:05:43.251 real 0m1.925s 00:05:43.251 user 0m2.079s 00:05:43.251 sys 0m0.566s 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.251 18:56:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.251 ************************************ 00:05:43.251 END TEST default_locks 00:05:43.251 ************************************ 00:05:43.251 18:56:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:43.251 18:56:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:43.251 18:56:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.251 18:56:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.251 18:56:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.251 ************************************ 00:05:43.251 START TEST default_locks_via_rpc 00:05:43.251 ************************************ 00:05:43.251 18:56:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:43.251 18:56:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60665 00:05:43.251 18:56:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
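default_locks, which just finished, pins down the baseline behaviour: a target started with -m 0x1 holds a spdk_cpu_lock file lock for its core, and once killprocess reaps it the pid is gone, so a second waitforlisten on the same pid has to fail — the NOT wrapper above turns that expected failure into a pass. Stripped of the xtrace noise, and with paths abbreviated, the check amounts to:

# Sketch only: assert the per-core lock exists, then that it dies with the target.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

build/bin/spdk_tgt -m 0x1 &
tgt_pid=$!
waitforlisten "$tgt_pid"
locks_exist "$tgt_pid"                 # lock must be held while running
killprocess "$tgt_pid"
! waitforlisten "$tgt_pid"             # must fail: the process no longer exists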
00:05:43.251 18:56:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60665 00:05:43.251 18:56:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60665 ']' 00:05:43.251 18:56:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.251 18:56:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.251 18:56:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.251 18:56:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.251 18:56:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.251 [2024-07-15 18:56:10.433158] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:43.251 [2024-07-15 18:56:10.433255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60665 ] 00:05:43.509 [2024-07-15 18:56:10.572603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.509 [2024-07-15 18:56:10.699737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.509 [2024-07-15 18:56:10.758496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60665 00:05:44.443 18:56:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60665 00:05:44.443 18:56:11 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.701 18:56:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60665 00:05:44.702 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60665 ']' 00:05:44.702 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60665 00:05:44.702 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:44.702 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.702 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60665 00:05:44.702 killing process with pid 60665 00:05:44.702 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.702 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.702 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60665' 00:05:44.702 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60665 00:05:44.702 18:56:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60665 00:05:44.959 00:05:44.959 real 0m1.867s 00:05:44.959 user 0m1.990s 00:05:44.959 sys 0m0.566s 00:05:44.959 18:56:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.959 ************************************ 00:05:44.959 END TEST default_locks_via_rpc 00:05:44.959 ************************************ 00:05:44.959 18:56:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.216 18:56:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:45.216 18:56:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:45.216 18:56:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.216 18:56:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.216 18:56:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.216 ************************************ 00:05:45.216 START TEST non_locking_app_on_locked_coremask 00:05:45.216 ************************************ 00:05:45.216 18:56:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:45.216 18:56:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60711 00:05:45.216 18:56:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60711 /var/tmp/spdk.sock 00:05:45.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
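default_locks_via_rpc covers the same locks but toggled at runtime: the target is launched normally, framework_disable_cpumask_locks is expected to release every spdk_cpu_lock file, and framework_enable_cpumask_locks has to take them again before the final lslocks check. In outline — the lock-file glob and the shortened rpc.py path are assumptions:

# Sketch only: flip per-core file locks on a live target over RPC.
shopt -s nullglob
scripts/rpc.py framework_disable_cpumask_locks
lock_files=(/var/tmp/spdk_cpu_lock*)
if (( ${#lock_files[@]} != 0 )); then
    echo "locks unexpectedly still present" >&2
    exit 1
fi
scripts/rpc.py framework_enable_cpumask_locks
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock      # lock is held again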
00:05:45.216 18:56:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60711 ']' 00:05:45.216 18:56:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.216 18:56:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.216 18:56:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.216 18:56:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.216 18:56:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.216 18:56:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.216 [2024-07-15 18:56:12.349769] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:45.216 [2024-07-15 18:56:12.350059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60711 ] 00:05:45.216 [2024-07-15 18:56:12.487791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.474 [2024-07-15 18:56:12.606802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.474 [2024-07-15 18:56:12.660154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:46.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.405 18:56:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.405 18:56:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:46.405 18:56:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60727 00:05:46.405 18:56:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:46.405 18:56:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60727 /var/tmp/spdk2.sock 00:05:46.405 18:56:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60727 ']' 00:05:46.405 18:56:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.405 18:56:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.405 18:56:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
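What this test is building up to: the first target owns the core-0 lock, and the second instance is still allowed onto the same core mask because it passes --disable-cpumask-locks and talks on its own RPC socket. The shape of the scenario, teardown included (paths abbreviated):

# Sketch only: a locked and an unlocked target sharing core 0.
build/bin/spdk_tgt -m 0x1 &
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock

build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock

lslocks -p "$pid1" | grep -q spdk_cpu_lock    # only the first instance holds the lock
killprocess "$pid1"
killprocess "$pid2"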
00:05:46.405 18:56:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.405 18:56:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.405 [2024-07-15 18:56:13.386731] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:46.406 [2024-07-15 18:56:13.386816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60727 ] 00:05:46.406 [2024-07-15 18:56:13.528735] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:46.406 [2024-07-15 18:56:13.528789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.671 [2024-07-15 18:56:13.761895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.671 [2024-07-15 18:56:13.867555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:47.234 18:56:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.234 18:56:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:47.234 18:56:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60711 00:05:47.234 18:56:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60711 00:05:47.234 18:56:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.166 18:56:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60711 00:05:48.166 18:56:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60711 ']' 00:05:48.166 18:56:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60711 00:05:48.166 18:56:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:48.166 18:56:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.166 18:56:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60711 00:05:48.166 killing process with pid 60711 00:05:48.166 18:56:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.166 18:56:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.166 18:56:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60711' 00:05:48.166 18:56:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60711 00:05:48.166 18:56:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60711 00:05:49.101 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60727 00:05:49.101 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60727 ']' 00:05:49.101 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@952 -- # kill -0 60727 00:05:49.101 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:49.101 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.101 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60727 00:05:49.101 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.101 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.101 killing process with pid 60727 00:05:49.101 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60727' 00:05:49.101 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60727 00:05:49.101 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60727 00:05:49.665 00:05:49.665 real 0m4.613s 00:05:49.665 user 0m5.099s 00:05:49.665 sys 0m1.087s 00:05:49.665 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.665 ************************************ 00:05:49.665 18:56:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.665 END TEST non_locking_app_on_locked_coremask 00:05:49.665 ************************************ 00:05:49.665 18:56:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:49.665 18:56:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:49.665 18:56:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.665 18:56:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.665 18:56:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.665 ************************************ 00:05:49.665 START TEST locking_app_on_unlocked_coremask 00:05:49.665 ************************************ 00:05:49.665 18:56:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:49.665 18:56:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60800 00:05:49.665 18:56:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:49.665 18:56:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60800 /var/tmp/spdk.sock 00:05:49.665 18:56:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60800 ']' 00:05:49.665 18:56:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.665 18:56:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.665 18:56:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
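The teardown traced above is the killprocess helper every one of these tests ends with: it checks that the pid is still alive, reads the command name (an SPDK app shows up as reactor_0), then signals and reaps it so the next test starts from a clean slate. Approximately — the sudo branch and the error handling of the real helper are omitted, so this is an assumption-level sketch:

# Sketch only: stop an SPDK target by pid and reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 1              # nothing to kill
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")      # reactor_0 for an SPDK app
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null || true
}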
00:05:49.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.665 18:56:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.665 18:56:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.921 [2024-07-15 18:56:17.013824] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:49.921 [2024-07-15 18:56:17.014682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60800 ] 00:05:49.921 [2024-07-15 18:56:17.152208] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:49.921 [2024-07-15 18:56:17.152322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.178 [2024-07-15 18:56:17.329866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.178 [2024-07-15 18:56:17.404423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:50.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.743 18:56:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.743 18:56:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:50.743 18:56:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.743 18:56:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60816 00:05:50.743 18:56:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60816 /var/tmp/spdk2.sock 00:05:50.743 18:56:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60816 ']' 00:05:50.743 18:56:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.743 18:56:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.743 18:56:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.743 18:56:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.743 18:56:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.001 [2024-07-15 18:56:18.053604] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:51.001 [2024-07-15 18:56:18.053916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60816 ] 00:05:51.001 [2024-07-15 18:56:18.199246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.259 [2024-07-15 18:56:18.501366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.518 [2024-07-15 18:56:18.647803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:52.084 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.084 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:52.084 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60816 00:05:52.084 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.084 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60816 00:05:52.650 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60800 00:05:52.650 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60800 ']' 00:05:52.650 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60800 00:05:52.650 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:52.650 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.650 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60800 00:05:52.907 killing process with pid 60800 00:05:52.907 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.907 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.907 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60800' 00:05:52.907 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60800 00:05:52.907 18:56:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60800 00:05:53.511 18:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60816 00:05:53.511 18:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60816 ']' 00:05:53.511 18:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60816 00:05:53.511 18:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:53.511 18:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.511 18:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60816 00:05:53.511 killing process with pid 60816 00:05:53.511 18:56:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.511 18:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.511 18:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60816' 00:05:53.511 18:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60816 00:05:53.511 18:56:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60816 00:05:54.079 ************************************ 00:05:54.079 END TEST locking_app_on_unlocked_coremask 00:05:54.079 ************************************ 00:05:54.079 00:05:54.079 real 0m4.199s 00:05:54.079 user 0m4.507s 00:05:54.079 sys 0m1.226s 00:05:54.079 18:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.079 18:56:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.079 18:56:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:54.079 18:56:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:54.079 18:56:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.079 18:56:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.079 18:56:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.079 ************************************ 00:05:54.079 START TEST locking_app_on_locked_coremask 00:05:54.079 ************************************ 00:05:54.079 18:56:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:54.079 18:56:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60883 00:05:54.079 18:56:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.079 18:56:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60883 /var/tmp/spdk.sock 00:05:54.079 18:56:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60883 ']' 00:05:54.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.079 18:56:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.079 18:56:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.079 18:56:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.079 18:56:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.079 18:56:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.079 [2024-07-15 18:56:21.266198] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:54.079 [2024-07-15 18:56:21.266309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60883 ] 00:05:54.337 [2024-07-15 18:56:21.399067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.337 [2024-07-15 18:56:21.525625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.337 [2024-07-15 18:56:21.580765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60899 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60899 /var/tmp/spdk2.sock 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60899 /var/tmp/spdk2.sock 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:55.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60899 /var/tmp/spdk2.sock 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60899 ']' 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.273 18:56:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.273 [2024-07-15 18:56:22.320631] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
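Editor's note: the case above deliberately starts a second spdk_tgt on the same core mask (-m 0x1) with a separate RPC socket and wraps waitforlisten in NOT, so the test only passes if the second instance fails to come up. The sketch below shows the shape of that expected-failure idiom with a hypothetical expect_failure wrapper; it is not the autotest_common.sh implementation, just an illustration:

    # Illustrative only: expect_failure succeeds when the wrapped command fails.
    expect_failure() {
        if "$@"; then
            echo "ERROR: '$*' succeeded but was expected to fail" >&2
            return 1
        fi
        return 0
    }

    # First target claims core 0; a second one on the same mask must not start.
    # (The real test waits for the first target's RPC socket before launching
    #  the second, and kills both afterwards.)
    build/bin/spdk_tgt -m 0x1 &
    expect_failure build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock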
00:05:55.273 [2024-07-15 18:56:22.320727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60899 ] 00:05:55.273 [2024-07-15 18:56:22.467220] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60883 has claimed it. 00:05:55.273 [2024-07-15 18:56:22.467286] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.841 ERROR: process (pid: 60899) is no longer running 00:05:55.841 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60899) - No such process 00:05:55.841 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.841 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:55.841 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:55.841 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.841 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.841 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.841 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60883 00:05:55.841 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60883 00:05:55.841 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.408 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60883 00:05:56.409 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60883 ']' 00:05:56.409 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60883 00:05:56.409 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:56.409 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.409 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60883 00:05:56.409 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.409 killing process with pid 60883 00:05:56.409 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.409 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60883' 00:05:56.409 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60883 00:05:56.409 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60883 00:05:56.666 ************************************ 00:05:56.666 END TEST locking_app_on_locked_coremask 00:05:56.666 ************************************ 00:05:56.666 00:05:56.666 real 0m2.658s 00:05:56.666 user 0m3.080s 00:05:56.666 sys 0m0.618s 00:05:56.666 18:56:23 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.666 18:56:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.666 18:56:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:56.666 18:56:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.666 18:56:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.666 18:56:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.666 18:56:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.666 ************************************ 00:05:56.666 START TEST locking_overlapped_coremask 00:05:56.666 ************************************ 00:05:56.666 18:56:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:56.666 18:56:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60950 00:05:56.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.666 18:56:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60950 /var/tmp/spdk.sock 00:05:56.666 18:56:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.666 18:56:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60950 ']' 00:05:56.666 18:56:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.666 18:56:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.666 18:56:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.666 18:56:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.666 18:56:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.925 [2024-07-15 18:56:23.978848] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:05:56.925 [2024-07-15 18:56:23.978951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60950 ] 00:05:56.925 [2024-07-15 18:56:24.123066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.184 [2024-07-15 18:56:24.255344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.184 [2024-07-15 18:56:24.255612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.184 [2024-07-15 18:56:24.255761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.184 [2024-07-15 18:56:24.314297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60968 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60968 /var/tmp/spdk2.sock 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60968 /var/tmp/spdk2.sock 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:57.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60968 /var/tmp/spdk2.sock 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60968 ']' 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.751 18:56:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.751 [2024-07-15 18:56:25.023355] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
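Editor's note: the failure that follows is plain core-mask arithmetic. The first target took -m 0x7 (cores 0, 1 and 2) and the second asks for -m 0x1c (cores 2, 3 and 4), so the masks overlap on core 2 and the second instance cannot claim it. A quick shell check of the overlap, with the masks taken from the commands above:

    # 0x7 = 0b00111 -> cores 0,1,2 ; 0x1c = 0b11100 -> cores 2,3,4
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2

    for core in 0 1 2 3 4; do
        if (( ((0x7 & 0x1c) >> core) & 1 )); then
            echo "core $core is claimed by both masks"
        fi
    done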
00:05:57.751 [2024-07-15 18:56:25.023430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60968 ] 00:05:58.010 [2024-07-15 18:56:25.166263] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60950 has claimed it. 00:05:58.010 [2024-07-15 18:56:25.166356] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.578 ERROR: process (pid: 60968) is no longer running 00:05:58.578 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60968) - No such process 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60950 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60950 ']' 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60950 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60950 00:05:58.578 killing process with pid 60950 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60950' 00:05:58.578 18:56:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60950 00:05:58.578 18:56:25 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60950 00:05:59.145 00:05:59.145 real 0m2.310s 00:05:59.145 user 0m6.343s 00:05:59.145 sys 0m0.454s 00:05:59.145 18:56:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.145 ************************************ 00:05:59.145 END TEST locking_overlapped_coremask 00:05:59.145 ************************************ 00:05:59.145 18:56:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.145 18:56:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.145 18:56:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.145 18:56:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.145 18:56:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.145 18:56:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.145 ************************************ 00:05:59.145 START TEST locking_overlapped_coremask_via_rpc 00:05:59.145 ************************************ 00:05:59.145 18:56:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:59.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.145 18:56:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61008 00:05:59.145 18:56:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61008 /var/tmp/spdk.sock 00:05:59.145 18:56:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61008 ']' 00:05:59.145 18:56:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.145 18:56:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.145 18:56:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.145 18:56:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.145 18:56:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.145 18:56:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.145 [2024-07-15 18:56:26.326437] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:05:59.145 [2024-07-15 18:56:26.326586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61008 ] 00:05:59.404 [2024-07-15 18:56:26.463771] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.404 [2024-07-15 18:56:26.463830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.404 [2024-07-15 18:56:26.604552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.404 [2024-07-15 18:56:26.604792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.404 [2024-07-15 18:56:26.604785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.404 [2024-07-15 18:56:26.664725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.372 18:56:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.372 18:56:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.372 18:56:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61026 00:06:00.372 18:56:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61026 /var/tmp/spdk2.sock 00:06:00.372 18:56:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:00.372 18:56:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61026 ']' 00:06:00.372 18:56:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.372 18:56:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.372 18:56:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.372 18:56:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.372 18:56:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.372 [2024-07-15 18:56:27.353884] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:00.372 [2024-07-15 18:56:27.354223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61026 ] 00:06:00.372 [2024-07-15 18:56:27.496121] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
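Editor's note: unlike the previous case, both targets here are started with --disable-cpumask-locks, so the same overlapping masks (0x7 and 0x1c) are allowed to coexist and the conflict is only provoked later, when locking is re-enabled over RPC. A rough outline of that setup using the masks and paths from the log; this is a sketch, not a verbatim reproduction of the test:

    # Two targets with overlapping masks; neither takes core locks at startup.
    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # Both come up because no /var/tmp/spdk_cpu_lock_* files are created here;
    # the conflict only surfaces once framework_enable_cpumask_locks is called.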
00:06:00.372 [2024-07-15 18:56:27.496185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.631 [2024-07-15 18:56:27.793200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.631 [2024-07-15 18:56:27.796710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.631 [2024-07-15 18:56:27.796711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.889 [2024-07-15 18:56:27.940792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:01.147 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.147 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.147 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.147 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.147 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.147 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.147 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.147 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:01.147 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.147 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:01.147 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.147 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:01.148 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.148 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.148 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.148 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.148 [2024-07-15 18:56:28.434762] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61008 has claimed it. 
00:06:01.405 request: 00:06:01.405 { 00:06:01.405 "method": "framework_enable_cpumask_locks", 00:06:01.405 "req_id": 1 00:06:01.405 } 00:06:01.405 Got JSON-RPC error response 00:06:01.405 response: 00:06:01.405 { 00:06:01.405 "code": -32603, 00:06:01.405 "message": "Failed to claim CPU core: 2" 00:06:01.405 } 00:06:01.405 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:01.405 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:01.405 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.405 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.405 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.405 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61008 /var/tmp/spdk.sock 00:06:01.405 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61008 ']' 00:06:01.405 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.405 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.405 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.406 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.406 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.664 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.664 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.664 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61026 /var/tmp/spdk2.sock 00:06:01.664 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61026 ']' 00:06:01.664 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.664 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.664 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
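Editor's note: the JSON-RPC exchange above is the heart of the test: enabling the locks on the first instance succeeds, while the same call against the second instance's socket fails with -32603 because core 2 is already claimed by pid 61008. Assuming the method is exposed through scripts/rpc.py under the same name (as rpc_cmd does here), the two calls would look roughly like this:

    # Succeeds: the -m 0x7 instance claims cores 0-2.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks

    # Fails with "Failed to claim CPU core: 2" (-32603): core 2 is already locked.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks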
00:06:01.664 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.664 18:56:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.922 18:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.922 18:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.922 18:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.922 18:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.922 18:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.922 18:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.922 00:06:01.922 real 0m2.779s 00:06:01.922 user 0m1.432s 00:06:01.922 sys 0m0.207s 00:06:01.922 18:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.922 18:56:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.922 ************************************ 00:06:01.922 END TEST locking_overlapped_coremask_via_rpc 00:06:01.922 ************************************ 00:06:01.922 18:56:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.922 18:56:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.922 18:56:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61008 ]] 00:06:01.922 18:56:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61008 00:06:01.922 18:56:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61008 ']' 00:06:01.922 18:56:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61008 00:06:01.923 18:56:29 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:01.923 18:56:29 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.923 18:56:29 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61008 00:06:01.923 killing process with pid 61008 00:06:01.923 18:56:29 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.923 18:56:29 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.923 18:56:29 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61008' 00:06:01.923 18:56:29 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61008 00:06:01.923 18:56:29 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61008 00:06:02.488 18:56:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61026 ]] 00:06:02.488 18:56:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61026 00:06:02.488 18:56:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61026 ']' 00:06:02.488 18:56:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61026 00:06:02.488 18:56:29 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:02.488 18:56:29 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.488 18:56:29 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61026 00:06:02.488 killing process with pid 61026 00:06:02.488 18:56:29 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:02.488 18:56:29 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:02.488 18:56:29 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61026' 00:06:02.488 18:56:29 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61026 00:06:02.488 18:56:29 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61026 00:06:03.054 18:56:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:03.054 18:56:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:03.054 18:56:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61008 ]] 00:06:03.054 18:56:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61008 00:06:03.054 18:56:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61008 ']' 00:06:03.054 Process with pid 61008 is not found 00:06:03.054 18:56:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61008 00:06:03.054 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61008) - No such process 00:06:03.054 18:56:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61008 is not found' 00:06:03.054 18:56:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61026 ]] 00:06:03.054 18:56:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61026 00:06:03.054 18:56:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61026 ']' 00:06:03.054 Process with pid 61026 is not found 00:06:03.054 18:56:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61026 00:06:03.054 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61026) - No such process 00:06:03.054 18:56:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61026 is not found' 00:06:03.054 18:56:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:03.054 00:06:03.054 real 0m21.828s 00:06:03.054 user 0m37.855s 00:06:03.054 sys 0m5.657s 00:06:03.054 18:56:30 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.054 18:56:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.054 ************************************ 00:06:03.054 END TEST cpu_locks 00:06:03.054 ************************************ 00:06:03.054 18:56:30 event -- common/autotest_common.sh@1142 -- # return 0 00:06:03.054 ************************************ 00:06:03.054 END TEST event 00:06:03.054 ************************************ 00:06:03.054 00:06:03.054 real 0m50.889s 00:06:03.054 user 1m37.611s 00:06:03.054 sys 0m9.525s 00:06:03.054 18:56:30 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.054 18:56:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.054 18:56:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:03.054 18:56:30 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:03.054 18:56:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.055 18:56:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.055 18:56:30 -- common/autotest_common.sh@10 -- # set +x 00:06:03.055 ************************************ 00:06:03.055 START TEST thread 
00:06:03.055 ************************************ 00:06:03.055 18:56:30 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:03.055 * Looking for test storage... 00:06:03.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:03.055 18:56:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:03.055 18:56:30 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:03.055 18:56:30 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.055 18:56:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.055 ************************************ 00:06:03.055 START TEST thread_poller_perf 00:06:03.055 ************************************ 00:06:03.055 18:56:30 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:03.055 [2024-07-15 18:56:30.324281] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:03.055 [2024-07-15 18:56:30.324493] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61154 ] 00:06:03.312 [2024-07-15 18:56:30.456545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.312 [2024-07-15 18:56:30.572028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.312 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:04.717 ====================================== 00:06:04.717 busy:2208546714 (cyc) 00:06:04.717 total_run_count: 314000 00:06:04.717 tsc_hz: 2200000000 (cyc) 00:06:04.717 ====================================== 00:06:04.717 poller_cost: 7033 (cyc), 3196 (nsec) 00:06:04.717 00:06:04.717 real 0m1.362s 00:06:04.717 user 0m1.198s 00:06:04.717 sys 0m0.058s 00:06:04.717 18:56:31 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.718 18:56:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.718 ************************************ 00:06:04.718 END TEST thread_poller_perf 00:06:04.718 ************************************ 00:06:04.718 18:56:31 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:04.718 18:56:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.718 18:56:31 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:04.718 18:56:31 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.718 18:56:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.718 ************************************ 00:06:04.718 START TEST thread_poller_perf 00:06:04.718 ************************************ 00:06:04.718 18:56:31 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.718 [2024-07-15 18:56:31.745124] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
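Editor's note: the figures in the poller_perf summary above are related by simple arithmetic: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure follows from the TSC rate. For this 1 µs-period run, 2208546714 / 314000 ≈ 7033 cycles and 7033 / 2.2 GHz ≈ 3196 ns, matching the report (the tool appears to truncate rather than round; the exact rounding inside poller_perf may differ slightly). The same check from a shell:

    awk 'BEGIN {
        busy = 2208546714; runs = 314000; tsc_hz = 2200000000
        cyc  = int(busy / runs)           # cycles per poller invocation -> 7033
        nsec = int(cyc * 1e9 / tsc_hz)    # convert via the TSC rate     -> 3196
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
    }'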
00:06:04.718 [2024-07-15 18:56:31.745292] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61190 ] 00:06:04.718 [2024-07-15 18:56:31.894986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.975 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:04.975 [2024-07-15 18:56:32.013719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.907 ====================================== 00:06:05.907 busy:2202223232 (cyc) 00:06:05.907 total_run_count: 4149000 00:06:05.907 tsc_hz: 2200000000 (cyc) 00:06:05.907 ====================================== 00:06:05.907 poller_cost: 530 (cyc), 240 (nsec) 00:06:05.907 00:06:05.907 real 0m1.381s 00:06:05.907 user 0m1.197s 00:06:05.907 sys 0m0.076s 00:06:05.907 ************************************ 00:06:05.907 END TEST thread_poller_perf 00:06:05.907 ************************************ 00:06:05.907 18:56:33 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.907 18:56:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.907 18:56:33 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:05.907 18:56:33 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:05.907 ************************************ 00:06:05.907 END TEST thread 00:06:05.907 ************************************ 00:06:05.907 00:06:05.907 real 0m2.930s 00:06:05.907 user 0m2.449s 00:06:05.907 sys 0m0.259s 00:06:05.907 18:56:33 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.907 18:56:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.907 18:56:33 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.907 18:56:33 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:05.907 18:56:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.907 18:56:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.907 18:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:05.907 ************************************ 00:06:05.907 START TEST accel 00:06:05.907 ************************************ 00:06:05.907 18:56:33 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:06.164 * Looking for test storage... 00:06:06.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:06.164 18:56:33 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:06.164 18:56:33 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:06.164 18:56:33 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:06.164 18:56:33 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61263 00:06:06.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.164 18:56:33 accel -- accel/accel.sh@63 -- # waitforlisten 61263 00:06:06.164 18:56:33 accel -- common/autotest_common.sh@829 -- # '[' -z 61263 ']' 00:06:06.164 18:56:33 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.164 18:56:33 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.164 18:56:33 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:06.164 18:56:33 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.164 18:56:33 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.164 18:56:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.164 18:56:33 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:06.164 18:56:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.164 18:56:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.164 18:56:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.164 18:56:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.164 18:56:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.164 18:56:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:06.164 18:56:33 accel -- accel/accel.sh@41 -- # jq -r . 00:06:06.164 [2024-07-15 18:56:33.352170] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:06.164 [2024-07-15 18:56:33.352302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61263 ] 00:06:06.421 [2024-07-15 18:56:33.495172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.421 [2024-07-15 18:56:33.615279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.421 [2024-07-15 18:56:33.668536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:06.986 18:56:34 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.986 18:56:34 accel -- common/autotest_common.sh@862 -- # return 0 00:06:06.986 18:56:34 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:06.986 18:56:34 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:06.986 18:56:34 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:06.986 18:56:34 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:06.986 18:56:34 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:06.986 18:56:34 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.243 18:56:34 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:07.243 18:56:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:07.243 18:56:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:07.243 18:56:34 accel -- accel/accel.sh@75 -- # killprocess 61263 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@948 -- # '[' -z 61263 ']' 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@952 -- # kill -0 61263 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@953 -- # uname 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61263 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.243 killing process with pid 61263 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61263' 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@967 -- # kill 61263 00:06:07.243 18:56:34 accel -- common/autotest_common.sh@972 -- # wait 61263 00:06:07.501 18:56:34 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:07.501 18:56:34 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:07.501 18:56:34 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:07.501 18:56:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.501 18:56:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.501 18:56:34 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:07.501 18:56:34 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:07.501 18:56:34 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:07.501 18:56:34 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.501 18:56:34 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.501 18:56:34 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.501 18:56:34 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.501 18:56:34 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.501 18:56:34 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:07.501 18:56:34 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:07.792 18:56:34 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.792 18:56:34 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:07.792 18:56:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.792 18:56:34 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:07.792 18:56:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:07.792 18:56:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.792 18:56:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.792 ************************************ 00:06:07.792 START TEST accel_missing_filename 00:06:07.792 ************************************ 00:06:07.792 18:56:34 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:07.792 18:56:34 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:07.792 18:56:34 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:07.792 18:56:34 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:07.792 18:56:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.792 18:56:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:07.792 18:56:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.792 18:56:34 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:07.792 18:56:34 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:07.792 18:56:34 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:07.792 18:56:34 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.792 18:56:34 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.792 18:56:34 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.792 18:56:34 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.792 18:56:34 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.792 18:56:34 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:07.792 18:56:34 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:07.792 [2024-07-15 18:56:34.863972] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:07.792 [2024-07-15 18:56:34.864053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61316 ] 00:06:07.792 [2024-07-15 18:56:34.998170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.051 [2024-07-15 18:56:35.115952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.051 [2024-07-15 18:56:35.172658] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.051 [2024-07-15 18:56:35.247207] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:08.051 A filename is required. 
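Editor's note: the failure above is the point of the accel_missing_filename case: -w compress needs an input file, so accel_perf exits with "A filename is required." when -l is omitted. The compress_verify case that follows supplies one; a standalone invocation along those lines, with the paths shortened to be relative to the repo, might look like this:

    # Compress the test input file for 1 second. No -y here: compress does not
    # support the verify option, as the next test in this log demonstrates.
    build/examples/accel_perf -t 1 -w compress -l test/accel/bib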
00:06:08.051 18:56:35 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:08.051 18:56:35 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.051 18:56:35 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:08.051 18:56:35 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:08.051 18:56:35 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:08.051 18:56:35 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.051 00:06:08.051 real 0m0.499s 00:06:08.051 user 0m0.331s 00:06:08.051 sys 0m0.112s 00:06:08.051 18:56:35 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.051 18:56:35 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:08.051 ************************************ 00:06:08.051 END TEST accel_missing_filename 00:06:08.051 ************************************ 00:06:08.309 18:56:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.310 18:56:35 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:08.310 18:56:35 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:08.310 18:56:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.310 18:56:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.310 ************************************ 00:06:08.310 START TEST accel_compress_verify 00:06:08.310 ************************************ 00:06:08.310 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:08.310 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:08.310 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:08.310 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:08.310 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.310 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:08.310 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.310 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:08.310 18:56:35 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:08.310 18:56:35 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:08.310 18:56:35 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.310 18:56:35 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.310 18:56:35 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.310 18:56:35 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.310 18:56:35 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.310 18:56:35 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:08.310 18:56:35 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:08.310 [2024-07-15 18:56:35.412269] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:08.310 [2024-07-15 18:56:35.412916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61339 ] 00:06:08.310 [2024-07-15 18:56:35.552139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.567 [2024-07-15 18:56:35.666938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.567 [2024-07-15 18:56:35.721108] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.567 [2024-07-15 18:56:35.796351] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:08.825 00:06:08.826 Compression does not support the verify option, aborting. 00:06:08.826 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:08.826 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.826 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:08.826 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:08.826 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:08.826 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.826 00:06:08.826 real 0m0.498s 00:06:08.826 user 0m0.332s 00:06:08.826 sys 0m0.112s 00:06:08.826 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.826 ************************************ 00:06:08.826 18:56:35 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:08.826 END TEST accel_compress_verify 00:06:08.826 ************************************ 00:06:08.826 18:56:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.826 18:56:35 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:08.826 18:56:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:08.826 18:56:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.826 18:56:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.826 ************************************ 00:06:08.826 START TEST accel_wrong_workload 00:06:08.826 ************************************ 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:08.826 18:56:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:08.826 18:56:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:08.826 18:56:35 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.826 18:56:35 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.826 18:56:35 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.826 18:56:35 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.826 18:56:35 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.826 18:56:35 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:08.826 18:56:35 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:08.826 Unsupported workload type: foobar 00:06:08.826 [2024-07-15 18:56:35.955123] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:08.826 accel_perf options: 00:06:08.826 [-h help message] 00:06:08.826 [-q queue depth per core] 00:06:08.826 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:08.826 [-T number of threads per core 00:06:08.826 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:08.826 [-t time in seconds] 00:06:08.826 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:08.826 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:08.826 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:08.826 [-l for compress/decompress workloads, name of uncompressed input file 00:06:08.826 [-S for crc32c workload, use this seed value (default 0) 00:06:08.826 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:08.826 [-f for fill workload, use this BYTE value (default 255) 00:06:08.826 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:08.826 [-y verify result if this switch is on] 00:06:08.826 [-a tasks to allocate per core (default: same value as -q)] 00:06:08.826 Can be used to spread operations across a wider range of memory. 
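The block above is the complete usage text accel_perf prints after rejecting the foobar workload. Purely for illustration, and using only flags that appear in that listing plus the binary path already shown in this log, a well-formed invocation would look something like:

# Hypothetical example (not run by this job): software crc32c for 1 second,
# queue depth 64, 4 KiB transfers, seed 32, with result verification enabled.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -q 64 -o 4096 -t 1 -w crc32c -S 32 -y

The crc32c test cases later in this log exercise essentially the same flags (-t 1 -w crc32c -S 32 -y) through the accel_test wrapper.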
00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.826 00:06:08.826 real 0m0.029s 00:06:08.826 user 0m0.015s 00:06:08.826 sys 0m0.014s 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.826 18:56:35 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:08.826 ************************************ 00:06:08.826 END TEST accel_wrong_workload 00:06:08.826 ************************************ 00:06:08.826 18:56:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.826 18:56:35 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:08.826 18:56:35 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:08.826 18:56:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.826 18:56:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.826 ************************************ 00:06:08.826 START TEST accel_negative_buffers 00:06:08.826 ************************************ 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:08.826 18:56:36 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:08.826 18:56:36 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:08.826 18:56:36 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.826 18:56:36 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.826 18:56:36 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.826 18:56:36 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.826 18:56:36 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.826 18:56:36 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:08.826 18:56:36 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:08.826 -x option must be non-negative. 
00:06:08.826 [2024-07-15 18:56:36.033431] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:08.826 accel_perf options: 00:06:08.826 [-h help message] 00:06:08.826 [-q queue depth per core] 00:06:08.826 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:08.826 [-T number of threads per core 00:06:08.826 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:08.826 [-t time in seconds] 00:06:08.826 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:08.826 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:08.826 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:08.826 [-l for compress/decompress workloads, name of uncompressed input file 00:06:08.826 [-S for crc32c workload, use this seed value (default 0) 00:06:08.826 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:08.826 [-f for fill workload, use this BYTE value (default 255) 00:06:08.826 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:08.826 [-y verify result if this switch is on] 00:06:08.826 [-a tasks to allocate per core (default: same value as -q)] 00:06:08.826 Can be used to spread operations across a wider range of memory. 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.826 00:06:08.826 real 0m0.031s 00:06:08.826 user 0m0.021s 00:06:08.826 sys 0m0.010s 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.826 18:56:36 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:08.826 ************************************ 00:06:08.826 END TEST accel_negative_buffers 00:06:08.826 ************************************ 00:06:08.826 18:56:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.826 18:56:36 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:08.826 18:56:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:08.826 18:56:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.826 18:56:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.826 ************************************ 00:06:08.826 START TEST accel_crc32c 00:06:08.826 ************************************ 00:06:08.826 18:56:36 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:08.826 18:56:36 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:08.826 [2024-07-15 18:56:36.111634] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:08.827 [2024-07-15 18:56:36.111717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61399 ] 00:06:09.084 [2024-07-15 18:56:36.243469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.084 [2024-07-15 18:56:36.360371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.342 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.343 18:56:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:10.744 18:56:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.744 00:06:10.744 real 0m1.553s 00:06:10.744 user 0m1.345s 00:06:10.744 sys 0m0.115s 00:06:10.744 18:56:37 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.744 18:56:37 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:10.744 ************************************ 00:06:10.744 END TEST accel_crc32c 00:06:10.744 ************************************ 00:06:10.744 18:56:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.744 18:56:37 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:10.744 18:56:37 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:10.744 18:56:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.744 18:56:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.744 ************************************ 00:06:10.744 START TEST accel_crc32c_C2 00:06:10.744 ************************************ 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:10.744 18:56:37 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.744 18:56:37 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:10.744 [2024-07-15 18:56:37.717190] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:10.744 [2024-07-15 18:56:37.718029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61433 ] 00:06:10.744 [2024-07-15 18:56:37.856159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.744 [2024-07-15 18:56:38.004583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.004 18:56:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.380 18:56:39 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.380 00:06:12.380 real 0m1.622s 00:06:12.380 user 0m1.375s 00:06:12.380 sys 0m0.153s 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.380 18:56:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:12.380 ************************************ 00:06:12.380 END TEST accel_crc32c_C2 00:06:12.380 ************************************ 00:06:12.380 18:56:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.380 18:56:39 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:12.380 18:56:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:12.380 18:56:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.380 18:56:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.380 ************************************ 00:06:12.380 START TEST accel_copy 00:06:12.380 ************************************ 00:06:12.380 18:56:39 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:12.380 18:56:39 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:12.380 18:56:39 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:12.380 [2024-07-15 18:56:39.389348] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:12.380 [2024-07-15 18:56:39.389456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61468 ] 00:06:12.380 [2024-07-15 18:56:39.526853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.380 [2024-07-15 18:56:39.654933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.639 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.640 18:56:39 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.640 18:56:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:14.016 18:56:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.016 00:06:14.016 real 0m1.516s 00:06:14.016 user 0m1.309s 00:06:14.016 sys 0m0.113s 00:06:14.016 ************************************ 00:06:14.016 END TEST accel_copy 00:06:14.016 ************************************ 00:06:14.016 18:56:40 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.016 18:56:40 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:14.016 18:56:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.016 18:56:40 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.016 18:56:40 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:14.016 18:56:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.016 18:56:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.016 ************************************ 00:06:14.016 START TEST accel_fill 00:06:14.016 ************************************ 00:06:14.016 18:56:40 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.016 18:56:40 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:14.016 18:56:40 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:14.016 [2024-07-15 18:56:40.951867] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:14.016 [2024-07-15 18:56:40.951966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61508 ] 00:06:14.016 [2024-07-15 18:56:41.091625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.016 [2024-07-15 18:56:41.213099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.016 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.016 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.016 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.016 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.016 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.016 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.016 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.017 18:56:41 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.017 18:56:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:15.393 18:56:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.393 00:06:15.393 real 0m1.514s 00:06:15.393 user 0m1.302s 00:06:15.393 sys 0m0.117s 00:06:15.393 18:56:42 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.393 18:56:42 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:15.393 ************************************ 00:06:15.393 END TEST accel_fill 00:06:15.393 ************************************ 00:06:15.393 18:56:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.393 18:56:42 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:15.393 18:56:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:15.393 18:56:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.393 18:56:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.393 ************************************ 00:06:15.393 START TEST accel_copy_crc32c 00:06:15.393 ************************************ 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:15.393 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:15.393 [2024-07-15 18:56:42.516915] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:15.393 [2024-07-15 18:56:42.516996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61537 ] 00:06:15.393 [2024-07-15 18:56:42.650982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.652 [2024-07-15 18:56:42.772040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.652 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:15.653 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.653 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.653 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.653 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.653 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.653 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.653 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.653 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.653 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.653 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.653 18:56:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.071 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.071 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.072 ************************************ 00:06:17.072 END TEST accel_copy_crc32c 00:06:17.072 ************************************ 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.072 00:06:17.072 real 0m1.504s 00:06:17.072 user 0m1.301s 00:06:17.072 sys 0m0.110s 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.072 18:56:43 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:17.072 18:56:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.072 18:56:44 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:17.072 18:56:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:17.072 18:56:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.072 18:56:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.072 ************************************ 00:06:17.072 START TEST accel_copy_crc32c_C2 00:06:17.072 ************************************ 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.072 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:17.072 [2024-07-15 18:56:44.071276] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:17.072 [2024-07-15 18:56:44.071378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61577 ] 00:06:17.072 [2024-07-15 18:56:44.212623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.072 [2024-07-15 18:56:44.346474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.331 18:56:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 ************************************ 00:06:18.710 END TEST accel_copy_crc32c_C2 00:06:18.710 ************************************ 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.710 00:06:18.710 real 0m1.534s 00:06:18.710 
user 0m1.324s 00:06:18.710 sys 0m0.118s 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.710 18:56:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:18.710 18:56:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.710 18:56:45 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:18.710 18:56:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:18.710 18:56:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.710 18:56:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.710 ************************************ 00:06:18.710 START TEST accel_dualcast 00:06:18.710 ************************************ 00:06:18.710 18:56:45 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:18.710 [2024-07-15 18:56:45.665903] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:18.710 [2024-07-15 18:56:45.666011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61606 ] 00:06:18.710 [2024-07-15 18:56:45.802108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.710 [2024-07-15 18:56:45.902392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.710 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.711 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.711 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.711 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.711 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.711 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.711 18:56:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.711 18:56:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.711 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.711 18:56:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.087 18:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.088 18:56:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.088 18:56:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.088 18:56:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:20.088 18:56:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.088 00:06:20.088 real 0m1.510s 00:06:20.088 user 0m1.307s 00:06:20.088 sys 0m0.107s 00:06:20.088 18:56:47 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.088 18:56:47 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:20.088 ************************************ 00:06:20.088 END TEST accel_dualcast 00:06:20.088 ************************************ 00:06:20.088 18:56:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.088 18:56:47 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:20.088 18:56:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:20.088 18:56:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.088 18:56:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.088 ************************************ 00:06:20.088 START TEST accel_compare 00:06:20.088 ************************************ 00:06:20.088 18:56:47 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:20.088 18:56:47 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:20.088 [2024-07-15 18:56:47.231884] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:20.088 [2024-07-15 18:56:47.232016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61646 ] 00:06:20.088 [2024-07-15 18:56:47.363763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.347 [2024-07-15 18:56:47.483054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:20.347 18:56:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:21.725 18:56:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.725 00:06:21.725 real 0m1.517s 00:06:21.725 user 0m1.306s 00:06:21.725 sys 0m0.119s 00:06:21.725 ************************************ 00:06:21.725 END TEST accel_compare 00:06:21.725 ************************************ 00:06:21.725 18:56:48 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.725 18:56:48 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:21.725 18:56:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.725 18:56:48 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:21.725 18:56:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:21.725 18:56:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.725 18:56:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.725 ************************************ 00:06:21.725 START TEST accel_xor 00:06:21.725 ************************************ 00:06:21.725 18:56:48 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:21.725 18:56:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:21.725 [2024-07-15 18:56:48.805888] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:21.725 [2024-07-15 18:56:48.805992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61675 ] 00:06:21.725 [2024-07-15 18:56:48.946892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.019 [2024-07-15 18:56:49.057603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 18:56:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.397 18:56:50 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.397 00:06:23.397 real 0m1.506s 00:06:23.397 user 0m1.293s 00:06:23.397 sys 0m0.117s 00:06:23.397 ************************************ 00:06:23.397 END TEST accel_xor 00:06:23.397 ************************************ 00:06:23.397 18:56:50 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.397 18:56:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:23.397 18:56:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.397 18:56:50 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:23.397 18:56:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:23.397 18:56:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.397 18:56:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.397 ************************************ 00:06:23.397 START TEST accel_xor 00:06:23.397 ************************************ 00:06:23.397 18:56:50 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:23.397 [2024-07-15 18:56:50.365079] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:23.397 [2024-07-15 18:56:50.365187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61715 ] 00:06:23.397 [2024-07-15 18:56:50.503586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.397 [2024-07-15 18:56:50.622846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.397 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.655 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:23.655 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.655 18:56:50 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:23.655 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.656 18:56:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.590 18:56:51 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:24.590 18:56:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.590 00:06:24.590 real 0m1.511s 00:06:24.590 user 0m1.302s 00:06:24.590 sys 0m0.117s 00:06:24.590 18:56:51 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.590 ************************************ 00:06:24.590 END TEST accel_xor 00:06:24.590 ************************************ 00:06:24.590 18:56:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:24.848 18:56:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.848 18:56:51 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:24.848 18:56:51 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:24.848 18:56:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.848 18:56:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.848 ************************************ 00:06:24.848 START TEST accel_dif_verify 00:06:24.848 ************************************ 00:06:24.848 18:56:51 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:24.848 18:56:51 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:24.849 [2024-07-15 18:56:51.933561] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:24.849 [2024-07-15 18:56:51.933675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61744 ] 00:06:24.849 [2024-07-15 18:56:52.071007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.107 [2024-07-15 18:56:52.188370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.107 18:56:52 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:25.107 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.108 18:56:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.479 18:56:53 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:26.479 18:56:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.479 00:06:26.479 real 0m1.504s 00:06:26.479 user 0m1.301s 00:06:26.479 sys 0m0.112s 00:06:26.479 18:56:53 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.479 ************************************ 00:06:26.479 END TEST accel_dif_verify 00:06:26.479 ************************************ 00:06:26.479 18:56:53 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:26.479 18:56:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.479 18:56:53 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:26.479 18:56:53 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:26.479 18:56:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.479 18:56:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.479 ************************************ 00:06:26.479 START TEST accel_dif_generate 00:06:26.479 ************************************ 00:06:26.479 18:56:53 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.479 18:56:53 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:26.479 18:56:53 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:26.479 [2024-07-15 18:56:53.482177] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:26.479 [2024-07-15 18:56:53.482294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61784 ] 00:06:26.479 [2024-07-15 18:56:53.626040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.479 [2024-07-15 18:56:53.744612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.737 18:56:53 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.737 18:56:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:28.112 18:56:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.112 00:06:28.112 real 0m1.516s 
00:06:28.112 user 0m1.304s 00:06:28.112 sys 0m0.118s 00:06:28.112 18:56:54 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.112 18:56:54 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:28.112 ************************************ 00:06:28.112 END TEST accel_dif_generate 00:06:28.112 ************************************ 00:06:28.112 18:56:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.112 18:56:55 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:28.112 18:56:55 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:28.112 18:56:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.112 18:56:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.112 ************************************ 00:06:28.112 START TEST accel_dif_generate_copy 00:06:28.112 ************************************ 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:28.112 [2024-07-15 18:56:55.049767] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:28.112 [2024-07-15 18:56:55.049881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61813 ] 00:06:28.112 [2024-07-15 18:56:55.181245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.112 [2024-07-15 18:56:55.300476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.112 18:56:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.484 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.484 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.484 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:29.484 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.484 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.484 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.484 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.484 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.484 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.484 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.484 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.484 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.485 ************************************ 00:06:29.485 END TEST accel_dif_generate_copy 00:06:29.485 ************************************ 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.485 00:06:29.485 real 0m1.506s 00:06:29.485 user 0m1.298s 00:06:29.485 sys 0m0.115s 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.485 18:56:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:29.485 18:56:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.485 18:56:56 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:29.485 18:56:56 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.485 18:56:56 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:29.485 18:56:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.485 18:56:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.485 ************************************ 00:06:29.485 START TEST accel_comp 00:06:29.485 ************************************ 00:06:29.485 18:56:56 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:29.485 18:56:56 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:29.485 18:56:56 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:29.485 [2024-07-15 18:56:56.605794] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:29.485 [2024-07-15 18:56:56.605901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61855 ] 00:06:29.485 [2024-07-15 18:56:56.743809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.743 [2024-07-15 18:56:56.862340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.743 18:56:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.117 18:56:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.117 18:56:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.117 18:56:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.117 18:56:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.117 18:56:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.117 18:56:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.117 18:56:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.117 18:56:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.117 18:56:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.117 18:56:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.117 18:56:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:31.118 18:56:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.118 00:06:31.118 real 0m1.498s 00:06:31.118 user 0m1.277s 00:06:31.118 sys 0m0.130s 00:06:31.118 18:56:58 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.118 18:56:58 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:31.118 ************************************ 00:06:31.118 END TEST accel_comp 00:06:31.118 ************************************ 00:06:31.118 18:56:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.118 18:56:58 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:31.118 18:56:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:31.118 18:56:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.118 18:56:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.118 ************************************ 00:06:31.118 START TEST accel_decomp 00:06:31.118 ************************************ 00:06:31.118 18:56:58 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:31.118 18:56:58 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:31.118 [2024-07-15 18:56:58.143371] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:31.118 [2024-07-15 18:56:58.143452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61884 ] 00:06:31.118 [2024-07-15 18:56:58.273274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.118 [2024-07-15 18:56:58.373686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.376 18:56:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.752 18:56:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.752 18:56:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.753 18:56:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.753 00:06:32.753 real 0m1.486s 00:06:32.753 user 0m1.272s 00:06:32.753 sys 0m0.123s 00:06:32.753 18:56:59 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.753 ************************************ 00:06:32.753 END TEST accel_decomp 00:06:32.753 ************************************ 00:06:32.753 18:56:59 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:32.753 18:56:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.753 18:56:59 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:32.753 18:56:59 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:32.753 18:56:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.753 18:56:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.753 ************************************ 00:06:32.753 START TEST accel_decomp_full 00:06:32.753 ************************************ 00:06:32.753 18:56:59 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:32.753 [2024-07-15 18:56:59.688195] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:32.753 [2024-07-15 18:56:59.688297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61924 ] 00:06:32.753 [2024-07-15 18:56:59.826236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.753 [2024-07-15 18:56:59.917817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.753 18:56:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.155 18:57:01 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.155 18:57:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.155 00:06:34.155 real 0m1.501s 00:06:34.155 user 0m1.289s 00:06:34.155 sys 0m0.119s 00:06:34.155 18:57:01 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.155 18:57:01 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:34.155 ************************************ 00:06:34.155 END TEST accel_decomp_full 00:06:34.155 ************************************ 00:06:34.155 18:57:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.155 18:57:01 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:34.155 18:57:01 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:34.155 18:57:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.155 18:57:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.155 ************************************ 00:06:34.155 START TEST accel_decomp_mcore 00:06:34.155 ************************************ 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:34.155 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:34.155 [2024-07-15 18:57:01.242716] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:34.155 [2024-07-15 18:57:01.242783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61953 ] 00:06:34.155 [2024-07-15 18:57:01.375975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.414 [2024-07-15 18:57:01.477565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.414 [2024-07-15 18:57:01.477696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.414 [2024-07-15 18:57:01.477804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.414 [2024-07-15 18:57:01.477947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.414 18:57:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.791 18:57:02 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.791 00:06:35.791 real 0m1.500s 00:06:35.791 user 0m4.679s 00:06:35.791 sys 0m0.131s 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.791 ************************************ 00:06:35.791 END TEST accel_decomp_mcore 00:06:35.791 ************************************ 00:06:35.791 18:57:02 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:35.791 18:57:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.791 18:57:02 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.791 18:57:02 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:35.791 18:57:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.791 18:57:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.791 ************************************ 00:06:35.791 START TEST accel_decomp_full_mcore 00:06:35.791 ************************************ 00:06:35.791 18:57:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.791 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:35.791 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:35.791 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.791 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.791 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.791 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:35.791 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:35.791 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.791 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.791 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.792 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.792 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.792 18:57:02 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:35.792 18:57:02 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:35.792 [2024-07-15 18:57:02.796449] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:35.792 [2024-07-15 18:57:02.796587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61995 ] 00:06:35.792 [2024-07-15 18:57:02.935706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.792 [2024-07-15 18:57:03.039851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.792 [2024-07-15 18:57:03.040016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.792 [2024-07-15 18:57:03.041318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.792 [2024-07-15 18:57:03.041333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.050 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.050 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.050 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.050 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.050 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.050 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.050 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.050 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.050 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.050 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.050 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.051 18:57:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.051 18:57:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.426 00:06:37.426 real 0m1.528s 00:06:37.426 user 0m4.770s 00:06:37.426 sys 0m0.131s 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.426 18:57:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:37.426 ************************************ 00:06:37.426 END TEST accel_decomp_full_mcore 00:06:37.426 ************************************ 00:06:37.426 18:57:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.426 18:57:04 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:37.426 18:57:04 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:37.426 18:57:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.426 18:57:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.426 ************************************ 00:06:37.426 START TEST accel_decomp_mthread 00:06:37.426 ************************************ 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:37.426 [2024-07-15 18:57:04.379970] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:37.426 [2024-07-15 18:57:04.380082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62033 ] 00:06:37.426 [2024-07-15 18:57:04.519584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.426 [2024-07-15 18:57:04.616983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.426 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.427 18:57:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.804 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.805 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.805 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.805 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.805 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.805 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.805 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.805 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.805 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.805 18:57:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.805 00:06:38.805 real 0m1.483s 00:06:38.805 user 0m1.272s 00:06:38.805 sys 0m0.119s 00:06:38.805 ************************************ 00:06:38.805 END TEST accel_decomp_mthread 00:06:38.805 18:57:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.805 18:57:05 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:38.805 ************************************ 00:06:38.805 18:57:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.805 18:57:05 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.805 18:57:05 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:38.805 18:57:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.805 18:57:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.805 ************************************ 00:06:38.805 START 
TEST accel_decomp_full_mthread 00:06:38.805 ************************************ 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:38.805 18:57:05 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:38.805 [2024-07-15 18:57:05.917252] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:38.805 [2024-07-15 18:57:05.917343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62066 ] 00:06:38.805 [2024-07-15 18:57:06.055690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.064 [2024-07-15 18:57:06.168632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.064 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.065 18:57:06 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.065 18:57:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.442 18:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.442 00:06:40.442 real 0m1.535s 00:06:40.443 user 0m1.315s 00:06:40.443 sys 0m0.125s 00:06:40.443 ************************************ 00:06:40.443 END TEST accel_decomp_full_mthread 00:06:40.443 ************************************ 00:06:40.443 18:57:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.443 18:57:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
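The accel_decomp_full_mthread case above drives the accel_perf example binary through the accel_test helper. A minimal standalone sketch of the same invocation, using the paths from this run, is shown below; the JSON accel config that the harness normally pipes in on fd 62 is replaced by a placeholder, and the meanings noted for -o and -T are assumptions taken from how accel.sh uses them, not asserted from documentation.

    # Standalone re-run of the traced command; the accel JSON config normally
    # arrives on fd 62 from build_accel_config and is only a placeholder here.
    SPDK=/home/vagrant/spdk_repo/spdk
    args=(
        -c /dev/fd/62                 # JSON accel config read from fd 62
        -t 1                          # run the workload for 1 second
        -w decompress                 # exercise the decompress opcode
        -l "$SPDK/test/accel/bib"     # compressed input payload shipped with the tests
        -y                            # verify the decompressed output
        -o 0 -T 2                     # forwarded by accel.sh; -T 2 is the multithread variant (assumed: two workers)
    )
    "$SPDK/build/examples/accel_perf" "${args[@]}" 62<<< '{}'   # '{}' is a stand-in config

The final '[[ software == software ]]' check in the case confirms the operation ran on the software accel module rather than a hardware engine.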
00:06:40.443 18:57:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.443 18:57:07 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:40.443 18:57:07 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.443 18:57:07 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:40.443 18:57:07 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:40.443 18:57:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.443 18:57:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.443 18:57:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.443 18:57:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.443 18:57:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.443 18:57:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.443 18:57:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.443 18:57:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:40.443 18:57:07 accel -- accel/accel.sh@41 -- # jq -r . 00:06:40.443 ************************************ 00:06:40.443 START TEST accel_dif_functional_tests 00:06:40.443 ************************************ 00:06:40.443 18:57:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.443 [2024-07-15 18:57:07.516347] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:40.443 [2024-07-15 18:57:07.516457] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62106 ] 00:06:40.443 [2024-07-15 18:57:07.647972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.702 [2024-07-15 18:57:07.743888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.702 [2024-07-15 18:57:07.744139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.702 [2024-07-15 18:57:07.744142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.702 [2024-07-15 18:57:07.800224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.702 00:06:40.702 00:06:40.702 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.702 http://cunit.sourceforge.net/ 00:06:40.702 00:06:40.702 00:06:40.702 Suite: accel_dif 00:06:40.702 Test: verify: DIF generated, GUARD check ...passed 00:06:40.702 Test: verify: DIF generated, APPTAG check ...passed 00:06:40.702 Test: verify: DIF generated, REFTAG check ...passed 00:06:40.702 Test: verify: DIF not generated, GUARD check ...passed 00:06:40.702 Test: verify: DIF not generated, APPTAG check ...passed 00:06:40.702 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 18:57:07.836264] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.702 [2024-07-15 18:57:07.836544] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.702 [2024-07-15 18:57:07.836688] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.702 passed 00:06:40.702 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:40.702 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:40.702 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:06:40.702 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:40.702 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-07-15 18:57:07.836868] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:40.702 passed 00:06:40.702 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:40.702 Test: verify copy: DIF generated, GUARD check ...passed 00:06:40.702 Test: verify copy: DIF generated, APPTAG check ...passed[2024-07-15 18:57:07.837113] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:40.702 00:06:40.702 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:40.702 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:40.702 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 18:57:07.837425] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.702 [2024-07-15 18:57:07.837573] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.702 passed 00:06:40.702 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:40.702 Test: generate copy: DIF generated, GUARD check ...passed 00:06:40.702 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:40.702 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:40.702 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-07-15 18:57:07.837694] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.702 passed 00:06:40.702 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:40.702 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:40.702 Test: generate copy: iovecs-len validate ...passed 00:06:40.702 Test: generate copy: buffer alignment validate ...passed 00:06:40.702 00:06:40.702 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.702 suites 1 1 n/a 0 0 00:06:40.702 tests 26 26 26 0 0 00:06:40.702 asserts 115 115 115 0 n/a 00:06:40.702 00:06:40.702 Elapsed time = 0.004 seconds 00:06:40.702 [2024-07-15 18:57:07.838042] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
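The *ERROR* lines in the DIF suite above are expected output: the negative cases ("DIF not generated", "verify copy: DIF not generated", the iovecs-len case) deliberately corrupt the 8-byte protection information (16-bit guard CRC, 16-bit application tag, 32-bit reference tag) and assert that verification reports the mismatch, which is why each such error is immediately followed by "passed". To rerun only this suite by hand, the invocation below mirrors the trace; the config value is a placeholder, the real one comes from build_accel_config.

    # Rerun the DIF functional tests on their own (paths from this run).
    cfg='{}'   # placeholder; substitute the JSON emitted by build_accel_config
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 62<<< "$cfg"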
00:06:40.961 00:06:40.961 real 0m0.580s 00:06:40.961 user 0m0.774s 00:06:40.961 sys 0m0.159s 00:06:40.961 18:57:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.961 ************************************ 00:06:40.961 END TEST accel_dif_functional_tests 00:06:40.961 ************************************ 00:06:40.961 18:57:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:40.961 18:57:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.961 00:06:40.961 real 0m34.897s 00:06:40.961 user 0m36.582s 00:06:40.961 sys 0m4.002s 00:06:40.961 18:57:08 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.961 18:57:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.961 ************************************ 00:06:40.961 END TEST accel 00:06:40.961 ************************************ 00:06:40.961 18:57:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:40.961 18:57:08 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:40.961 18:57:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.961 18:57:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.961 18:57:08 -- common/autotest_common.sh@10 -- # set +x 00:06:40.961 ************************************ 00:06:40.961 START TEST accel_rpc 00:06:40.961 ************************************ 00:06:40.961 18:57:08 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:40.961 * Looking for test storage... 00:06:40.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:40.961 18:57:08 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.961 18:57:08 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62170 00:06:40.961 18:57:08 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62170 00:06:40.962 18:57:08 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:40.962 18:57:08 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62170 ']' 00:06:40.962 18:57:08 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.962 18:57:08 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.962 18:57:08 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.962 18:57:08 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.962 18:57:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.220 [2024-07-15 18:57:08.289919] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:41.220 [2024-07-15 18:57:08.290478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62170 ] 00:06:41.220 [2024-07-15 18:57:08.429973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.479 [2024-07-15 18:57:08.554602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.045 18:57:09 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.045 18:57:09 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:42.045 18:57:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:42.045 18:57:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:42.045 18:57:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:42.045 18:57:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:42.045 18:57:09 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:42.045 18:57:09 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.045 18:57:09 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.045 18:57:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.045 ************************************ 00:06:42.045 START TEST accel_assign_opcode 00:06:42.045 ************************************ 00:06:42.045 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:42.045 18:57:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:42.045 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.045 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:42.045 [2024-07-15 18:57:09.235236] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:42.045 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.045 18:57:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:42.045 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.045 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:42.045 [2024-07-15 18:57:09.243229] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:42.045 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.045 18:57:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:42.045 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.045 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:42.045 [2024-07-15 18:57:09.310807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.303 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.303 18:57:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:42.303 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.303 
18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:42.303 18:57:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:42.303 18:57:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:42.303 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.303 software 00:06:42.303 00:06:42.303 real 0m0.305s 00:06:42.303 user 0m0.052s 00:06:42.303 sys 0m0.013s 00:06:42.303 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.303 18:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:42.303 ************************************ 00:06:42.303 END TEST accel_assign_opcode 00:06:42.303 ************************************ 00:06:42.303 18:57:09 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:42.303 18:57:09 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62170 00:06:42.303 18:57:09 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62170 ']' 00:06:42.303 18:57:09 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62170 00:06:42.303 18:57:09 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:42.303 18:57:09 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.303 18:57:09 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62170 00:06:42.562 18:57:09 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.562 18:57:09 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.562 killing process with pid 62170 00:06:42.562 18:57:09 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62170' 00:06:42.562 18:57:09 accel_rpc -- common/autotest_common.sh@967 -- # kill 62170 00:06:42.562 18:57:09 accel_rpc -- common/autotest_common.sh@972 -- # wait 62170 00:06:42.821 00:06:42.821 real 0m1.859s 00:06:42.821 user 0m1.909s 00:06:42.821 sys 0m0.455s 00:06:42.821 18:57:10 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.821 ************************************ 00:06:42.821 END TEST accel_rpc 00:06:42.821 ************************************ 00:06:42.821 18:57:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.821 18:57:10 -- common/autotest_common.sh@1142 -- # return 0 00:06:42.821 18:57:10 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:42.821 18:57:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.821 18:57:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.821 18:57:10 -- common/autotest_common.sh@10 -- # set +x 00:06:42.821 ************************************ 00:06:42.821 START TEST app_cmdline 00:06:42.821 ************************************ 00:06:42.821 18:57:10 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:43.081 * Looking for test storage... 
00:06:43.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:43.081 18:57:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:43.081 18:57:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62258 00:06:43.081 18:57:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62258 00:06:43.081 18:57:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:43.081 18:57:10 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62258 ']' 00:06:43.081 18:57:10 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.081 18:57:10 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.081 18:57:10 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.081 18:57:10 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.081 18:57:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.081 [2024-07-15 18:57:10.198435] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:43.081 [2024-07-15 18:57:10.198565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62258 ] 00:06:43.081 [2024-07-15 18:57:10.338482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.341 [2024-07-15 18:57:10.455778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.341 [2024-07-15 18:57:10.513681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.910 18:57:11 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.910 18:57:11 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:43.910 18:57:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:44.169 { 00:06:44.169 "version": "SPDK v24.09-pre git sha1 cdc37ee83", 00:06:44.169 "fields": { 00:06:44.169 "major": 24, 00:06:44.169 "minor": 9, 00:06:44.169 "patch": 0, 00:06:44.169 "suffix": "-pre", 00:06:44.169 "commit": "cdc37ee83" 00:06:44.169 } 00:06:44.169 } 00:06:44.169 18:57:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:44.169 18:57:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:44.169 18:57:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:44.169 18:57:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:44.169 18:57:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:44.169 18:57:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:44.169 18:57:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:44.169 18:57:11 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.169 18:57:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.169 18:57:11 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.428 18:57:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:44.428 18:57:11 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:44.428 18:57:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.428 18:57:11 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:44.428 18:57:11 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.428 18:57:11 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.428 18:57:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.428 18:57:11 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.428 18:57:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.428 18:57:11 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.428 18:57:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.428 18:57:11 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.428 18:57:11 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:44.428 18:57:11 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.687 request: 00:06:44.687 { 00:06:44.687 "method": "env_dpdk_get_mem_stats", 00:06:44.687 "req_id": 1 00:06:44.687 } 00:06:44.687 Got JSON-RPC error response 00:06:44.687 response: 00:06:44.687 { 00:06:44.687 "code": -32601, 00:06:44.687 "message": "Method not found" 00:06:44.687 } 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.687 18:57:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62258 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62258 ']' 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62258 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62258 00:06:44.687 killing process with pid 62258 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62258' 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@967 -- # kill 62258 00:06:44.687 18:57:11 app_cmdline -- common/autotest_common.sh@972 -- # wait 62258 00:06:45.255 00:06:45.255 real 0m2.201s 00:06:45.255 user 0m2.746s 00:06:45.255 sys 0m0.495s 00:06:45.255 18:57:12 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.255 18:57:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.255 ************************************ 00:06:45.255 END TEST app_cmdline 00:06:45.255 
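The app_cmdline run above starts the target with an RPC allow-list and checks both sides of it: the two whitelisted methods answer normally (spdk_get_version returning the version/fields JSON shown, rpc_get_methods listing exactly those two), while anything else, here env_dpdk_get_mem_stats, comes back as JSON-RPC error -32601 "Method not found". Reproduced by hand, assuming the default socket:

    # Exercise the --rpcs-allowed allow-list checked by cmdline.sh.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    rpc="$SPDK/scripts/rpc.py"
    sleep 1
    "$rpc" spdk_get_version                 # allowed: version/fields JSON as above
    "$rpc" rpc_get_methods | jq -r '.[]'    # allowed: only the two whitelisted names
    "$rpc" env_dpdk_get_mem_stats           # rejected: -32601 Method not found
    kill %1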
************************************ 00:06:45.255 18:57:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:45.255 18:57:12 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:45.255 18:57:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.255 18:57:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.255 18:57:12 -- common/autotest_common.sh@10 -- # set +x 00:06:45.255 ************************************ 00:06:45.255 START TEST version 00:06:45.255 ************************************ 00:06:45.255 18:57:12 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:45.255 * Looking for test storage... 00:06:45.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:45.255 18:57:12 version -- app/version.sh@17 -- # get_header_version major 00:06:45.255 18:57:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:45.255 18:57:12 version -- app/version.sh@14 -- # cut -f2 00:06:45.255 18:57:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.255 18:57:12 version -- app/version.sh@17 -- # major=24 00:06:45.255 18:57:12 version -- app/version.sh@18 -- # get_header_version minor 00:06:45.255 18:57:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:45.255 18:57:12 version -- app/version.sh@14 -- # cut -f2 00:06:45.255 18:57:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.255 18:57:12 version -- app/version.sh@18 -- # minor=9 00:06:45.255 18:57:12 version -- app/version.sh@19 -- # get_header_version patch 00:06:45.255 18:57:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:45.255 18:57:12 version -- app/version.sh@14 -- # cut -f2 00:06:45.255 18:57:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.255 18:57:12 version -- app/version.sh@19 -- # patch=0 00:06:45.256 18:57:12 version -- app/version.sh@20 -- # get_header_version suffix 00:06:45.256 18:57:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:45.256 18:57:12 version -- app/version.sh@14 -- # cut -f2 00:06:45.256 18:57:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.256 18:57:12 version -- app/version.sh@20 -- # suffix=-pre 00:06:45.256 18:57:12 version -- app/version.sh@22 -- # version=24.9 00:06:45.256 18:57:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:45.256 18:57:12 version -- app/version.sh@28 -- # version=24.9rc0 00:06:45.256 18:57:12 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:45.256 18:57:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:45.256 18:57:12 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:45.256 18:57:12 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:45.256 00:06:45.256 real 0m0.153s 00:06:45.256 user 0m0.082s 00:06:45.256 sys 0m0.107s 00:06:45.256 18:57:12 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.256 18:57:12 version -- common/autotest_common.sh@10 -- # set +x 
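version.sh above derives the version purely from include/spdk/version.h: it greps one #define per field, strips the quotes, composes "24.9rc0" for this tree (suffix -pre maps to rc0), and cross-checks the result against python3 'import spdk; print(spdk.__version__)'. The parsing pattern for a single field, copied from the trace:

    # Extract SPDK_VERSION_MAJOR the same way version.sh does (header path from this run).
    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "$major"   # 24 in this run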
00:06:45.256 ************************************ 00:06:45.256 END TEST version 00:06:45.256 ************************************ 00:06:45.256 18:57:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:45.256 18:57:12 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:45.256 18:57:12 -- spdk/autotest.sh@198 -- # uname -s 00:06:45.256 18:57:12 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:45.256 18:57:12 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:45.256 18:57:12 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:45.256 18:57:12 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:45.256 18:57:12 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:45.256 18:57:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.256 18:57:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.256 18:57:12 -- common/autotest_common.sh@10 -- # set +x 00:06:45.256 ************************************ 00:06:45.256 START TEST spdk_dd 00:06:45.256 ************************************ 00:06:45.256 18:57:12 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:45.515 * Looking for test storage... 00:06:45.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:45.515 18:57:12 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.515 18:57:12 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.515 18:57:12 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.515 18:57:12 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.515 18:57:12 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.515 18:57:12 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.515 18:57:12 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.515 18:57:12 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:45.515 18:57:12 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.515 18:57:12 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:45.775 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:45.775 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:45.775 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:45.775 18:57:13 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:45.775 18:57:13 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:45.775 18:57:13 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:45.775 18:57:13 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:45.775 18:57:13 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:45.775 18:57:13 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:45.775 18:57:13 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:45.775 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:45.775 18:57:13 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:45.775 18:57:13 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:46.036 18:57:13 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:46.036 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:46.037 
18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 
spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:46.037 * spdk_dd linked to liburing 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:46.037 18:57:13 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@57 
-- # CONFIG_HAVE_LIBBSD=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:46.037 18:57:13 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:46.038 18:57:13 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:46.038 18:57:13 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:46.038 18:57:13 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:46.038 18:57:13 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:46.038 18:57:13 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:46.038 18:57:13 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:46.038 18:57:13 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:46.038 18:57:13 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:46.038 18:57:13 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:46.038 18:57:13 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.038 18:57:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:46.038 ************************************ 00:06:46.038 START TEST spdk_dd_basic_rw 00:06:46.038 ************************************ 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:46.038 * Looking for test storage... 
00:06:46.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:46.038 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:46.298 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:46.298 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.299 ************************************ 00:06:46.299 START TEST dd_bs_lt_native_bs 00:06:46.299 ************************************ 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.299 18:57:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:46.299 { 00:06:46.299 "subsystems": [ 00:06:46.299 { 00:06:46.299 "subsystem": "bdev", 00:06:46.299 "config": [ 00:06:46.299 { 00:06:46.299 "params": { 00:06:46.299 "trtype": "pcie", 00:06:46.299 "traddr": "0000:00:10.0", 00:06:46.299 "name": "Nvme0" 00:06:46.299 }, 00:06:46.299 "method": "bdev_nvme_attach_controller" 00:06:46.299 }, 00:06:46.299 { 00:06:46.299 "method": "bdev_wait_for_examine" 00:06:46.299 } 00:06:46.299 ] 00:06:46.299 } 00:06:46.299 ] 00:06:46.299 } 00:06:46.299 [2024-07-15 18:57:13.487342] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:46.299 [2024-07-15 18:57:13.487445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62583 ] 00:06:46.557 [2024-07-15 18:57:13.627612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.557 [2024-07-15 18:57:13.742136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.557 [2024-07-15 18:57:13.805960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.816 [2024-07-15 18:57:13.917786] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:46.816 [2024-07-15 18:57:13.917875] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.816 [2024-07-15 18:57:14.043924] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:47.074 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:47.074 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.074 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:47.074 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:47.074 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:47.074 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.074 00:06:47.074 real 0m0.700s 00:06:47.074 user 0m0.464s 00:06:47.074 sys 0m0.187s 00:06:47.074 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.074 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:47.074 ************************************ 00:06:47.074 END TEST dd_bs_lt_native_bs 00:06:47.074 ************************************ 00:06:47.074 18:57:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:47.074 18:57:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.075 ************************************ 00:06:47.075 START TEST dd_rw 00:06:47.075 ************************************ 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:47.075 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.642 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:47.642 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:47.642 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.642 18:57:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.642 [2024-07-15 18:57:14.915220] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:47.643 [2024-07-15 18:57:14.915329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62620 ] 00:06:47.643 { 00:06:47.643 "subsystems": [ 00:06:47.643 { 00:06:47.643 "subsystem": "bdev", 00:06:47.643 "config": [ 00:06:47.643 { 00:06:47.643 "params": { 00:06:47.643 "trtype": "pcie", 00:06:47.643 "traddr": "0000:00:10.0", 00:06:47.643 "name": "Nvme0" 00:06:47.643 }, 00:06:47.643 "method": "bdev_nvme_attach_controller" 00:06:47.643 }, 00:06:47.643 { 00:06:47.643 "method": "bdev_wait_for_examine" 00:06:47.643 } 00:06:47.643 ] 00:06:47.643 } 00:06:47.643 ] 00:06:47.643 } 00:06:47.901 [2024-07-15 18:57:15.057056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.901 [2024-07-15 18:57:15.164848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.160 [2024-07-15 18:57:15.228003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.419  Copying: 60/60 [kB] (average 29 MBps) 00:06:48.419 00:06:48.419 18:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:48.419 18:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:48.419 18:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:48.419 18:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.419 { 00:06:48.419 "subsystems": [ 00:06:48.419 { 00:06:48.419 "subsystem": "bdev", 00:06:48.419 "config": [ 
00:06:48.419 { 00:06:48.419 "params": { 00:06:48.419 "trtype": "pcie", 00:06:48.419 "traddr": "0000:00:10.0", 00:06:48.419 "name": "Nvme0" 00:06:48.419 }, 00:06:48.419 "method": "bdev_nvme_attach_controller" 00:06:48.419 }, 00:06:48.419 { 00:06:48.419 "method": "bdev_wait_for_examine" 00:06:48.419 } 00:06:48.419 ] 00:06:48.419 } 00:06:48.419 ] 00:06:48.419 } 00:06:48.419 [2024-07-15 18:57:15.632570] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:06:48.419 [2024-07-15 18:57:15.632673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62633 ] 00:06:48.679 [2024-07-15 18:57:15.769325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.679 [2024-07-15 18:57:15.885211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.679 [2024-07-15 18:57:15.945885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.198  Copying: 60/60 [kB] (average 19 MBps) 00:06:49.198 00:06:49.198 18:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.198 18:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:49.198 18:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:49.198 18:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:49.198 18:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:49.198 18:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:49.198 18:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:49.198 18:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:49.198 18:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:49.198 18:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:49.198 18:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.198 [2024-07-15 18:57:16.357106] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:49.198 [2024-07-15 18:57:16.357217] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62649 ] 00:06:49.198 { 00:06:49.198 "subsystems": [ 00:06:49.198 { 00:06:49.198 "subsystem": "bdev", 00:06:49.198 "config": [ 00:06:49.198 { 00:06:49.198 "params": { 00:06:49.198 "trtype": "pcie", 00:06:49.198 "traddr": "0000:00:10.0", 00:06:49.198 "name": "Nvme0" 00:06:49.198 }, 00:06:49.198 "method": "bdev_nvme_attach_controller" 00:06:49.198 }, 00:06:49.198 { 00:06:49.198 "method": "bdev_wait_for_examine" 00:06:49.198 } 00:06:49.198 ] 00:06:49.198 } 00:06:49.198 ] 00:06:49.198 } 00:06:49.457 [2024-07-15 18:57:16.489212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.457 [2024-07-15 18:57:16.609025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.457 [2024-07-15 18:57:16.671418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.975  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:49.975 00:06:49.975 18:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:49.975 18:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:49.975 18:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:49.975 18:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:49.975 18:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:49.975 18:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:49.975 18:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.543 18:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:50.543 18:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:50.543 18:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:50.543 18:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.543 [2024-07-15 18:57:17.752355] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:50.543 [2024-07-15 18:57:17.752470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62679 ] 00:06:50.543 { 00:06:50.543 "subsystems": [ 00:06:50.543 { 00:06:50.543 "subsystem": "bdev", 00:06:50.543 "config": [ 00:06:50.543 { 00:06:50.543 "params": { 00:06:50.543 "trtype": "pcie", 00:06:50.543 "traddr": "0000:00:10.0", 00:06:50.543 "name": "Nvme0" 00:06:50.543 }, 00:06:50.543 "method": "bdev_nvme_attach_controller" 00:06:50.543 }, 00:06:50.543 { 00:06:50.543 "method": "bdev_wait_for_examine" 00:06:50.543 } 00:06:50.543 ] 00:06:50.543 } 00:06:50.543 ] 00:06:50.543 } 00:06:50.801 [2024-07-15 18:57:17.893520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.802 [2024-07-15 18:57:18.005668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.802 [2024-07-15 18:57:18.067561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.320  Copying: 60/60 [kB] (average 58 MBps) 00:06:51.320 00:06:51.320 18:57:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:51.320 18:57:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:51.320 18:57:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:51.320 18:57:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.320 [2024-07-15 18:57:18.482036] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:51.320 [2024-07-15 18:57:18.482153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62687 ] 00:06:51.320 { 00:06:51.320 "subsystems": [ 00:06:51.320 { 00:06:51.320 "subsystem": "bdev", 00:06:51.320 "config": [ 00:06:51.320 { 00:06:51.320 "params": { 00:06:51.320 "trtype": "pcie", 00:06:51.320 "traddr": "0000:00:10.0", 00:06:51.320 "name": "Nvme0" 00:06:51.320 }, 00:06:51.320 "method": "bdev_nvme_attach_controller" 00:06:51.320 }, 00:06:51.320 { 00:06:51.320 "method": "bdev_wait_for_examine" 00:06:51.320 } 00:06:51.320 ] 00:06:51.320 } 00:06:51.320 ] 00:06:51.320 } 00:06:51.610 [2024-07-15 18:57:18.619599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.610 [2024-07-15 18:57:18.729544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.610 [2024-07-15 18:57:18.789807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.868  Copying: 60/60 [kB] (average 58 MBps) 00:06:51.868 00:06:51.868 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.868 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:51.868 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:51.868 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:51.868 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:51.868 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:51.868 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:51.868 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:51.868 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:51.868 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:51.868 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.125 [2024-07-15 18:57:19.196684] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:52.125 [2024-07-15 18:57:19.196781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62708 ] 00:06:52.125 { 00:06:52.125 "subsystems": [ 00:06:52.125 { 00:06:52.125 "subsystem": "bdev", 00:06:52.125 "config": [ 00:06:52.125 { 00:06:52.125 "params": { 00:06:52.125 "trtype": "pcie", 00:06:52.125 "traddr": "0000:00:10.0", 00:06:52.125 "name": "Nvme0" 00:06:52.125 }, 00:06:52.125 "method": "bdev_nvme_attach_controller" 00:06:52.125 }, 00:06:52.125 { 00:06:52.125 "method": "bdev_wait_for_examine" 00:06:52.125 } 00:06:52.125 ] 00:06:52.125 } 00:06:52.125 ] 00:06:52.125 } 00:06:52.125 [2024-07-15 18:57:19.338422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.383 [2024-07-15 18:57:19.454996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.383 [2024-07-15 18:57:19.517911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.640  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:52.640 00:06:52.640 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:52.640 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:52.640 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:52.640 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:52.640 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:52.640 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:52.640 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:52.640 18:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.573 18:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:53.573 18:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:53.573 18:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:53.573 18:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.573 [2024-07-15 18:57:20.575707] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:53.573 [2024-07-15 18:57:20.575802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62727 ] 00:06:53.573 { 00:06:53.573 "subsystems": [ 00:06:53.573 { 00:06:53.573 "subsystem": "bdev", 00:06:53.573 "config": [ 00:06:53.573 { 00:06:53.573 "params": { 00:06:53.573 "trtype": "pcie", 00:06:53.573 "traddr": "0000:00:10.0", 00:06:53.573 "name": "Nvme0" 00:06:53.573 }, 00:06:53.573 "method": "bdev_nvme_attach_controller" 00:06:53.573 }, 00:06:53.573 { 00:06:53.573 "method": "bdev_wait_for_examine" 00:06:53.573 } 00:06:53.573 ] 00:06:53.573 } 00:06:53.573 ] 00:06:53.573 } 00:06:53.573 [2024-07-15 18:57:20.717151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.573 [2024-07-15 18:57:20.841766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.831 [2024-07-15 18:57:20.901401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.090  Copying: 56/56 [kB] (average 54 MBps) 00:06:54.090 00:06:54.090 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:54.090 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:54.090 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:54.090 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.090 [2024-07-15 18:57:21.297180] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:54.090 [2024-07-15 18:57:21.297270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62747 ] 00:06:54.090 { 00:06:54.090 "subsystems": [ 00:06:54.090 { 00:06:54.090 "subsystem": "bdev", 00:06:54.090 "config": [ 00:06:54.090 { 00:06:54.090 "params": { 00:06:54.090 "trtype": "pcie", 00:06:54.090 "traddr": "0000:00:10.0", 00:06:54.090 "name": "Nvme0" 00:06:54.090 }, 00:06:54.090 "method": "bdev_nvme_attach_controller" 00:06:54.090 }, 00:06:54.090 { 00:06:54.090 "method": "bdev_wait_for_examine" 00:06:54.090 } 00:06:54.090 ] 00:06:54.090 } 00:06:54.090 ] 00:06:54.090 } 00:06:54.349 [2024-07-15 18:57:21.436321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.349 [2024-07-15 18:57:21.533277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.349 [2024-07-15 18:57:21.591509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.866  Copying: 56/56 [kB] (average 27 MBps) 00:06:54.866 00:06:54.866 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.866 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:54.866 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:54.866 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:54.866 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:54.866 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:54.866 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:54.866 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:54.866 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:54.866 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:54.866 18:57:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.866 [2024-07-15 18:57:22.013990] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:54.866 [2024-07-15 18:57:22.014088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62768 ] 00:06:54.866 { 00:06:54.866 "subsystems": [ 00:06:54.866 { 00:06:54.866 "subsystem": "bdev", 00:06:54.866 "config": [ 00:06:54.866 { 00:06:54.866 "params": { 00:06:54.866 "trtype": "pcie", 00:06:54.866 "traddr": "0000:00:10.0", 00:06:54.866 "name": "Nvme0" 00:06:54.866 }, 00:06:54.866 "method": "bdev_nvme_attach_controller" 00:06:54.866 }, 00:06:54.866 { 00:06:54.866 "method": "bdev_wait_for_examine" 00:06:54.866 } 00:06:54.866 ] 00:06:54.866 } 00:06:54.866 ] 00:06:54.866 } 00:06:54.866 [2024-07-15 18:57:22.150450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.125 [2024-07-15 18:57:22.244325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.125 [2024-07-15 18:57:22.303381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.384  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:55.384 00:06:55.384 18:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:55.384 18:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:55.384 18:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:55.384 18:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:55.384 18:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:55.384 18:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:55.384 18:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.336 18:57:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:56.336 18:57:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:56.337 18:57:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:56.337 18:57:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.337 { 00:06:56.337 "subsystems": [ 00:06:56.337 { 00:06:56.337 "subsystem": "bdev", 00:06:56.337 "config": [ 00:06:56.337 { 00:06:56.337 "params": { 00:06:56.337 "trtype": "pcie", 00:06:56.337 "traddr": "0000:00:10.0", 00:06:56.337 "name": "Nvme0" 00:06:56.337 }, 00:06:56.337 "method": "bdev_nvme_attach_controller" 00:06:56.337 }, 00:06:56.337 { 00:06:56.337 "method": "bdev_wait_for_examine" 00:06:56.337 } 00:06:56.337 ] 00:06:56.337 } 00:06:56.337 ] 00:06:56.337 } 00:06:56.337 [2024-07-15 18:57:23.361251] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:56.337 [2024-07-15 18:57:23.361355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62787 ] 00:06:56.337 [2024-07-15 18:57:23.497682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.337 [2024-07-15 18:57:23.613202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.596 [2024-07-15 18:57:23.671279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.880  Copying: 56/56 [kB] (average 54 MBps) 00:06:56.880 00:06:56.880 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:56.880 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:56.880 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:56.880 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.880 { 00:06:56.880 "subsystems": [ 00:06:56.880 { 00:06:56.880 "subsystem": "bdev", 00:06:56.880 "config": [ 00:06:56.880 { 00:06:56.880 "params": { 00:06:56.880 "trtype": "pcie", 00:06:56.880 "traddr": "0000:00:10.0", 00:06:56.880 "name": "Nvme0" 00:06:56.880 }, 00:06:56.880 "method": "bdev_nvme_attach_controller" 00:06:56.880 }, 00:06:56.880 { 00:06:56.880 "method": "bdev_wait_for_examine" 00:06:56.880 } 00:06:56.880 ] 00:06:56.880 } 00:06:56.880 ] 00:06:56.880 } 00:06:56.880 [2024-07-15 18:57:24.061904] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:56.880 [2024-07-15 18:57:24.062020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62806 ] 00:06:57.164 [2024-07-15 18:57:24.198521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.164 [2024-07-15 18:57:24.308259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.164 [2024-07-15 18:57:24.361596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.423  Copying: 56/56 [kB] (average 54 MBps) 00:06:57.423 00:06:57.423 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.423 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:57.423 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:57.423 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:57.423 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:57.423 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:57.423 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:57.423 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:57.423 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:57.423 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:57.423 18:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:57.682 [2024-07-15 18:57:24.737296] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:57.682 [2024-07-15 18:57:24.737372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62816 ] 00:06:57.682 { 00:06:57.682 "subsystems": [ 00:06:57.682 { 00:06:57.682 "subsystem": "bdev", 00:06:57.682 "config": [ 00:06:57.682 { 00:06:57.682 "params": { 00:06:57.682 "trtype": "pcie", 00:06:57.682 "traddr": "0000:00:10.0", 00:06:57.682 "name": "Nvme0" 00:06:57.682 }, 00:06:57.682 "method": "bdev_nvme_attach_controller" 00:06:57.682 }, 00:06:57.682 { 00:06:57.682 "method": "bdev_wait_for_examine" 00:06:57.682 } 00:06:57.682 ] 00:06:57.682 } 00:06:57.682 ] 00:06:57.682 } 00:06:57.682 [2024-07-15 18:57:24.867026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.940 [2024-07-15 18:57:24.977480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.940 [2024-07-15 18:57:25.032916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.197  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:58.197 00:06:58.197 18:57:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:58.197 18:57:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:58.197 18:57:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:58.197 18:57:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:58.197 18:57:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:58.197 18:57:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:58.197 18:57:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:58.197 18:57:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.762 18:57:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:58.762 18:57:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:58.762 18:57:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:58.762 18:57:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.762 [2024-07-15 18:57:25.923298] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:58.762 [2024-07-15 18:57:25.923961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62841 ] 00:06:58.762 { 00:06:58.762 "subsystems": [ 00:06:58.762 { 00:06:58.762 "subsystem": "bdev", 00:06:58.762 "config": [ 00:06:58.762 { 00:06:58.762 "params": { 00:06:58.762 "trtype": "pcie", 00:06:58.762 "traddr": "0000:00:10.0", 00:06:58.762 "name": "Nvme0" 00:06:58.762 }, 00:06:58.762 "method": "bdev_nvme_attach_controller" 00:06:58.762 }, 00:06:58.762 { 00:06:58.762 "method": "bdev_wait_for_examine" 00:06:58.762 } 00:06:58.762 ] 00:06:58.762 } 00:06:58.762 ] 00:06:58.762 } 00:06:59.020 [2024-07-15 18:57:26.064416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.020 [2024-07-15 18:57:26.162422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.020 [2024-07-15 18:57:26.221807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.279  Copying: 48/48 [kB] (average 46 MBps) 00:06:59.279 00:06:59.540 18:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:59.540 18:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:59.540 18:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:59.540 18:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.540 { 00:06:59.540 "subsystems": [ 00:06:59.540 { 00:06:59.540 "subsystem": "bdev", 00:06:59.540 "config": [ 00:06:59.540 { 00:06:59.540 "params": { 00:06:59.540 "trtype": "pcie", 00:06:59.540 "traddr": "0000:00:10.0", 00:06:59.540 "name": "Nvme0" 00:06:59.540 }, 00:06:59.540 "method": "bdev_nvme_attach_controller" 00:06:59.540 }, 00:06:59.540 { 00:06:59.540 "method": "bdev_wait_for_examine" 00:06:59.540 } 00:06:59.540 ] 00:06:59.540 } 00:06:59.540 ] 00:06:59.540 } 00:06:59.540 [2024-07-15 18:57:26.646306] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:06:59.540 [2024-07-15 18:57:26.646437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62854 ] 00:06:59.540 [2024-07-15 18:57:26.791392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.799 [2024-07-15 18:57:26.901882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.799 [2024-07-15 18:57:26.955575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.057  Copying: 48/48 [kB] (average 46 MBps) 00:07:00.057 00:07:00.057 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.057 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:00.057 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:00.057 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:00.057 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:00.057 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:00.057 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:00.057 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:00.057 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:00.057 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:00.057 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:00.057 { 00:07:00.057 "subsystems": [ 00:07:00.057 { 00:07:00.057 "subsystem": "bdev", 00:07:00.057 "config": [ 00:07:00.057 { 00:07:00.057 "params": { 00:07:00.057 "trtype": "pcie", 00:07:00.057 "traddr": "0000:00:10.0", 00:07:00.057 "name": "Nvme0" 00:07:00.057 }, 00:07:00.057 "method": "bdev_nvme_attach_controller" 00:07:00.057 }, 00:07:00.057 { 00:07:00.057 "method": "bdev_wait_for_examine" 00:07:00.057 } 00:07:00.057 ] 00:07:00.057 } 00:07:00.057 ] 00:07:00.057 } 00:07:00.057 [2024-07-15 18:57:27.344443] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:00.057 [2024-07-15 18:57:27.344618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62875 ] 00:07:00.316 [2024-07-15 18:57:27.479745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.316 [2024-07-15 18:57:27.575082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.574 [2024-07-15 18:57:27.632521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.833  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:00.833 00:07:00.833 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:00.833 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:00.833 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:00.833 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:00.833 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:00.833 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:00.833 18:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.399 18:57:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:01.399 18:57:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:01.399 18:57:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:01.399 18:57:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.399 [2024-07-15 18:57:28.516962] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:01.399 [2024-07-15 18:57:28.517095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62894 ] 00:07:01.399 { 00:07:01.399 "subsystems": [ 00:07:01.399 { 00:07:01.399 "subsystem": "bdev", 00:07:01.399 "config": [ 00:07:01.399 { 00:07:01.399 "params": { 00:07:01.399 "trtype": "pcie", 00:07:01.399 "traddr": "0000:00:10.0", 00:07:01.399 "name": "Nvme0" 00:07:01.399 }, 00:07:01.399 "method": "bdev_nvme_attach_controller" 00:07:01.399 }, 00:07:01.399 { 00:07:01.399 "method": "bdev_wait_for_examine" 00:07:01.399 } 00:07:01.399 ] 00:07:01.399 } 00:07:01.399 ] 00:07:01.399 } 00:07:01.399 [2024-07-15 18:57:28.651975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.657 [2024-07-15 18:57:28.749784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.657 [2024-07-15 18:57:28.807600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.915  Copying: 48/48 [kB] (average 46 MBps) 00:07:01.915 00:07:01.915 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:01.915 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:01.915 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:01.915 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.915 [2024-07-15 18:57:29.190322] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:01.915 [2024-07-15 18:57:29.191046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62908 ] 00:07:01.915 { 00:07:01.915 "subsystems": [ 00:07:01.915 { 00:07:01.915 "subsystem": "bdev", 00:07:01.915 "config": [ 00:07:01.915 { 00:07:01.915 "params": { 00:07:01.915 "trtype": "pcie", 00:07:01.915 "traddr": "0000:00:10.0", 00:07:01.915 "name": "Nvme0" 00:07:01.915 }, 00:07:01.915 "method": "bdev_nvme_attach_controller" 00:07:01.915 }, 00:07:01.915 { 00:07:01.915 "method": "bdev_wait_for_examine" 00:07:01.915 } 00:07:01.915 ] 00:07:01.915 } 00:07:01.915 ] 00:07:01.915 } 00:07:02.188 [2024-07-15 18:57:29.328005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.188 [2024-07-15 18:57:29.422101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.451 [2024-07-15 18:57:29.478000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.709  Copying: 48/48 [kB] (average 46 MBps) 00:07:02.709 00:07:02.709 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.709 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:02.709 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:02.709 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:02.709 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:02.709 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:02.709 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:02.709 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:02.709 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:02.709 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:02.709 18:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.709 [2024-07-15 18:57:29.870231] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:02.709 [2024-07-15 18:57:29.870346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62923 ] 00:07:02.709 { 00:07:02.709 "subsystems": [ 00:07:02.709 { 00:07:02.709 "subsystem": "bdev", 00:07:02.709 "config": [ 00:07:02.709 { 00:07:02.709 "params": { 00:07:02.709 "trtype": "pcie", 00:07:02.709 "traddr": "0000:00:10.0", 00:07:02.709 "name": "Nvme0" 00:07:02.709 }, 00:07:02.709 "method": "bdev_nvme_attach_controller" 00:07:02.709 }, 00:07:02.709 { 00:07:02.709 "method": "bdev_wait_for_examine" 00:07:02.709 } 00:07:02.709 ] 00:07:02.709 } 00:07:02.709 ] 00:07:02.709 } 00:07:02.967 [2024-07-15 18:57:30.005723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.967 [2024-07-15 18:57:30.106853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.967 [2024-07-15 18:57:30.160451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.225  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:03.225 00:07:03.225 00:07:03.225 real 0m16.304s 00:07:03.225 user 0m12.072s 00:07:03.225 sys 0m5.892s 00:07:03.225 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.225 ************************************ 00:07:03.225 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.225 END TEST dd_rw 00:07:03.225 ************************************ 00:07:03.483 18:57:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.484 ************************************ 00:07:03.484 START TEST dd_rw_offset 00:07:03.484 ************************************ 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=b41oe5se4k0ve8mfxx52eimblnkk8h03onpjy6zsipnk8qqsxq4a2xg8idk81q8si74sehfvbva3lxtrenjrwq6agt4o37vk2mywz3b6n7oovnfc066l1dbsdmnzbl6wpbn4fg447zmfqlphnhpq48mynn0fvp9b248ub4kkp5yne7qyvopune6tbdw6u03cdaphrpr4roj3augnk8rhpv9xl7x049otkn5zs3eufu0cigi0jcqh9bs2khc0fb99aycafkua5hehdiycug2i1uve9br1dwf01jxtgtakl14bpm6rdl2ygbvlp82qg2od07wmp4a5eumghvpdfvm6amj3ugki4dzof51lsm67uossg6gah7qx7dxra0ih6ry01cz538g2a5c79ve4v05cfi83l9t9hem3lgr9qzby1c1jm1ojyfukhas9n440e99ulbltp0z6s75975myb0suwlhxxv6gxu5ljaqc2m2kikcw5a9ep0x8e6ctp4bu1apajwq7u7xygrgztsxu5lc5v9nkyb2l42z4heelofpg9ttm8hg6kvexj2qkmpsgen6nhxm335tssjurdwo3pmyvm6vxjtxcall4d462m5kzef2aqdhgzitnmhmc59hdhvc4hh0cnm94z87rmevnmp2xs4mdae302e0agjjgd3r3qpfubzxn7rv76ij9s16jb8330z1s90wjril9tnqyhudyf5843mej49756bih1snesesto5fj2ysf2hdhgf3b3lhzlgvdfpyodxvy4e5byaows9adefj2ndel2kh8n9msshb4zpv4jgw56i841num4b344lpz14kp0hvhyfek0lzdfgafeona2jzg4nshr4gpk1jwntun065m3o00lijelsttan5zyhfa9mgnavqhgb6f6y35rnhhtnktq8t0xkgzactpelsqwluq4u0sd8n3213whdqvts8hqccn9prz98rcp6u2wmuesikjo76ogovwsnfbzn6p8hu1lv87h8kqqu7yzidkaj4k3scngfm0u0pnkj1l6dybczpumu8sbmght3s88afcv4otsz5fr32a4c62l3p0qq3xdh24o7jm7u6tsbkwkuai2oclc8uju36wzo7gcfp413i3066e35womhzta0661ptz12mgfatuqj49t0mx3w48iv61tgokj8d209a6xzvo04904lxcv69n6lnfajn4fzl9hti2onsmcu2ie8x5y4ynpaanby588b2s7ncrdzqfy0q5kbbhrj1dplfyvwe2n9narpumu9z6p9c8ye9pu2qo6oehl9a60egfqml51r5dylwi1yax1vapjn69tzh2jjr4q9xw402s4zdd889m0296o9226ttruxwiiplkge80723gcecssq80gu4by72q5c35za7wayhubu7t9t0xf7rxqx0akvcdsry2rfd46bamz8hypgrd5ty9hc00xuaii2si263y42ko9ejwmq4sqqpawabr5e1cgmwj19isdkgn5idll51cbof68t0bpzsi5kuvispvkvxp7b1h0vulgcdiaq3l08tdu5yves0ak42oyfbk8rnjc9mqarb3w3xb72cerug006tyhchs59otn3nll93wwmqikj7l3szau55dbuhwntobi4b3iovjab3z6en00xgkdpxxj94ax8bmtg4x9kw9v51bzc041c220gpbg945lmimd38xmvrblyo3v1i4kpno1o0cln5uumrzegrtgx3la7uqixobmrjhir40tjmvu6583h1uter6yuco9h6cybjra0tv6axhozgyj9s55oo06f72ir6eal89gbyaj6sgx67s972259ndct8o577gpnjg76fm1bba90ih96l714phcbkgg7hfhpl22z36ddyv7j4gmj2pcfbvb2qwqgvcnbrvo7nkup11bm87m0k4p5bcw2matv8eyx142bowikjol7w6e1vj6eusop14l87jo8l8u7z33tl0qmb7nyzre82e3js4s9ymrd5y7l5ozb8uthrn22bndv8ln8p3gidwb0qagjon7o0tn7irf9m1y0mgmnvu0xcxc96y1271oaixtmj792qprastprkixunf7dmftjewinng0gwtxfyk56kq5hg6kwvayw2765o4lsg1ageazybpxatme2mpls54hgyjimjy4m04piiv4wy11vp5i88nq6b4l5o0xojjg1bsgnqzydsjkzu43tasn4poe1ezc4xtioi1lw8jbgzxb1fhvpig3dkzsift61r5wm2tgdlxc0jx0xev20gxktykyf7ac3f7m7fqqu7yr651njovkg3j3k8w4ji7akvoaiv3sflzdfo3znzc18o8vum5pqzk2zalt9vkshlxdg886q1lqrxgj2yw5tdk7jvo4ybw8d6k0jr3zzvw0si9guibscr3k98v3pl3o7brqf1egjwe39lhiyr0wi23gpdy7biqwi8xgdan8rzgegrge6f2l1vtyirgl7nfvj3ek5kpoaewt2pdt4gzp6n1x0h4pdneed2hpb1a7asf89zjzo9e73j7n4t2tvh9i5prpsizv2jz0340r9tn6rb8o880d8sa0w85efqr5a6ze382rfjqi59yedmp6w4kdtki7rxrp7rfq5hgnp6rr2kv7nfk3ejmem7vhonp4rim06ebj9flm2ryqkpbhixr3rq30r6b4xyoogbgc0r917zt24w3neit2dyldph8c1mf24p51vuy4ekftlnfxsik9pxa0lq53mu7lm0snxf0txhmt50tqqy3jdpfv7oitjy7dtmzxnaval8bwou8ybbmp201dxtrs0aptv6n5fq1411oc3fs7xyga55u9folzichuf1lt2ihjgyhlv4h1mhknui3gktsiwr1hpix58yecchso5eq2xd4wq5zmbm63wl0q8yb71we34x93we2l75j1zwybb2zh0qib5dlqrdm884nkqus7gkqynsoj3b86ettlxhsx7vowebidfwwcgal97xifb2875rir9vijii1g7ovegw8xm6ddsbl4jy5j82ipd9axommpyp6ifjh1ns5lb15i55fjg149k0jh17fvo5fh5o74w8udroh7bnvmykg99l4amho3p0tmutdoenn01nmi930kzlr46ahj0g30ha9ziuvyo3o6l388rsg0baa0rduka2q4uvc8oh3q8ihjmvybwregkr5mpvmtqcp6yfua9z0hylu4mcjr3lly61dvw6ek9dxifvvzaqy52e3hjxmy4me6tn35ohi2z2evi67c8r9zh2puqyojqapn9nphddtfmzhs7ir57cqifxrkr0ao7jh46ptdtk501nk5g8nlbcga7ktqkbkb41q6q32qphzebse71tb9h2nklrmkwt81555l4v3amg35x73l3g16u6g0nnogpxrzgkgy2t0ejsaraokq0z6odfth809lyqohwdsr1hh3a1tqfyz66c1j9wmgyci7afffq5coionopfb1x1r4a7fznkdbw0hq9lns314uy
zq7az3qridzxk91cv5dgzsp82r1qgevnonu1v02lh84fnq4i2ukg9z7im0duxic17ojo4w22r5y9zo2sjgg7dsrog71yc9joqx9h85qkcrz337hframpy9hdm21oqre2mw475beggdmixrv4wxeje6rujsioo6wjz2i144v6fqnuxpc40tlb4mb7d9lzbgyighs69zgd6e29ahjlcwux6xdwfrqux6lsusqhk5n2qxlntpeczyh0c2xjqmx0h398gyekq5qkpdiuemq27e1cnvt5m6wdd8e4uo38va4d9w4ela1ayj9g569vac6o7sjb5eoriy387r0f0vqdz0pj71xgzoz9dlcyf8mxsqd28c92o09wsigv9lw9wmzimo24eii3e5xf1oaysw9aigqpzzxg7d4ytce79ut18trk8hiq9wcfgx48po3rd39i04mdx13yls5qo6i5jjwwws09rh9eur9l2zestq5008n96pf1usdn7xc1oduznenno322fg6g12hiz875qm25tprfdceh4lv8jqsq6m 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:03.484 18:57:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:03.484 [2024-07-15 18:57:30.632713] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:03.484 [2024-07-15 18:57:30.632811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62959 ] 00:07:03.484 { 00:07:03.484 "subsystems": [ 00:07:03.484 { 00:07:03.484 "subsystem": "bdev", 00:07:03.484 "config": [ 00:07:03.484 { 00:07:03.484 "params": { 00:07:03.484 "trtype": "pcie", 00:07:03.484 "traddr": "0000:00:10.0", 00:07:03.484 "name": "Nvme0" 00:07:03.484 }, 00:07:03.484 "method": "bdev_nvme_attach_controller" 00:07:03.484 }, 00:07:03.484 { 00:07:03.484 "method": "bdev_wait_for_examine" 00:07:03.484 } 00:07:03.484 ] 00:07:03.484 } 00:07:03.484 ] 00:07:03.484 } 00:07:03.484 [2024-07-15 18:57:30.771943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.743 [2024-07-15 18:57:30.873842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.743 [2024-07-15 18:57:30.934559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.002  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:04.002 00:07:04.002 18:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:04.002 18:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:04.002 18:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:04.002 18:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:04.260 [2024-07-15 18:57:31.301638] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:04.260 [2024-07-15 18:57:31.301714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62973 ] 00:07:04.260 { 00:07:04.260 "subsystems": [ 00:07:04.260 { 00:07:04.260 "subsystem": "bdev", 00:07:04.260 "config": [ 00:07:04.260 { 00:07:04.260 "params": { 00:07:04.260 "trtype": "pcie", 00:07:04.260 "traddr": "0000:00:10.0", 00:07:04.260 "name": "Nvme0" 00:07:04.260 }, 00:07:04.260 "method": "bdev_nvme_attach_controller" 00:07:04.260 }, 00:07:04.260 { 00:07:04.260 "method": "bdev_wait_for_examine" 00:07:04.260 } 00:07:04.260 ] 00:07:04.260 } 00:07:04.260 ] 00:07:04.260 } 00:07:04.260 [2024-07-15 18:57:31.432366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.260 [2024-07-15 18:57:31.536167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.519 [2024-07-15 18:57:31.605437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.778  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:04.778 00:07:04.778 18:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ b41oe5se4k0ve8mfxx52eimblnkk8h03onpjy6zsipnk8qqsxq4a2xg8idk81q8si74sehfvbva3lxtrenjrwq6agt4o37vk2mywz3b6n7oovnfc066l1dbsdmnzbl6wpbn4fg447zmfqlphnhpq48mynn0fvp9b248ub4kkp5yne7qyvopune6tbdw6u03cdaphrpr4roj3augnk8rhpv9xl7x049otkn5zs3eufu0cigi0jcqh9bs2khc0fb99aycafkua5hehdiycug2i1uve9br1dwf01jxtgtakl14bpm6rdl2ygbvlp82qg2od07wmp4a5eumghvpdfvm6amj3ugki4dzof51lsm67uossg6gah7qx7dxra0ih6ry01cz538g2a5c79ve4v05cfi83l9t9hem3lgr9qzby1c1jm1ojyfukhas9n440e99ulbltp0z6s75975myb0suwlhxxv6gxu5ljaqc2m2kikcw5a9ep0x8e6ctp4bu1apajwq7u7xygrgztsxu5lc5v9nkyb2l42z4heelofpg9ttm8hg6kvexj2qkmpsgen6nhxm335tssjurdwo3pmyvm6vxjtxcall4d462m5kzef2aqdhgzitnmhmc59hdhvc4hh0cnm94z87rmevnmp2xs4mdae302e0agjjgd3r3qpfubzxn7rv76ij9s16jb8330z1s90wjril9tnqyhudyf5843mej49756bih1snesesto5fj2ysf2hdhgf3b3lhzlgvdfpyodxvy4e5byaows9adefj2ndel2kh8n9msshb4zpv4jgw56i841num4b344lpz14kp0hvhyfek0lzdfgafeona2jzg4nshr4gpk1jwntun065m3o00lijelsttan5zyhfa9mgnavqhgb6f6y35rnhhtnktq8t0xkgzactpelsqwluq4u0sd8n3213whdqvts8hqccn9prz98rcp6u2wmuesikjo76ogovwsnfbzn6p8hu1lv87h8kqqu7yzidkaj4k3scngfm0u0pnkj1l6dybczpumu8sbmght3s88afcv4otsz5fr32a4c62l3p0qq3xdh24o7jm7u6tsbkwkuai2oclc8uju36wzo7gcfp413i3066e35womhzta0661ptz12mgfatuqj49t0mx3w48iv61tgokj8d209a6xzvo04904lxcv69n6lnfajn4fzl9hti2onsmcu2ie8x5y4ynpaanby588b2s7ncrdzqfy0q5kbbhrj1dplfyvwe2n9narpumu9z6p9c8ye9pu2qo6oehl9a60egfqml51r5dylwi1yax1vapjn69tzh2jjr4q9xw402s4zdd889m0296o9226ttruxwiiplkge80723gcecssq80gu4by72q5c35za7wayhubu7t9t0xf7rxqx0akvcdsry2rfd46bamz8hypgrd5ty9hc00xuaii2si263y42ko9ejwmq4sqqpawabr5e1cgmwj19isdkgn5idll51cbof68t0bpzsi5kuvispvkvxp7b1h0vulgcdiaq3l08tdu5yves0ak42oyfbk8rnjc9mqarb3w3xb72cerug006tyhchs59otn3nll93wwmqikj7l3szau55dbuhwntobi4b3iovjab3z6en00xgkdpxxj94ax8bmtg4x9kw9v51bzc041c220gpbg945lmimd38xmvrblyo3v1i4kpno1o0cln5uumrzegrtgx3la7uqixobmrjhir40tjmvu6583h1uter6yuco9h6cybjra0tv6axhozgyj9s55oo06f72ir6eal89gbyaj6sgx67s972259ndct8o577gpnjg76fm1bba90ih96l714phcbkgg7hfhpl22z36ddyv7j4gmj2pcfbvb2qwqgvcnbrvo7nkup11bm87m0k4p5bcw2matv8eyx142bowikjol7w6e1vj6eusop14l87jo8l8u7z33tl0qmb7nyzre82e3js4s9ymrd5y7l5ozb8uthrn22bndv8ln8p3gidwb0qagjon7o0tn7irf9m1y0mgmnvu0xcxc96y1271oaixtmj792qprastprkixunf7dmftjewinng0gwtxfyk56kq5hg6kwvayw276
5o4lsg1ageazybpxatme2mpls54hgyjimjy4m04piiv4wy11vp5i88nq6b4l5o0xojjg1bsgnqzydsjkzu43tasn4poe1ezc4xtioi1lw8jbgzxb1fhvpig3dkzsift61r5wm2tgdlxc0jx0xev20gxktykyf7ac3f7m7fqqu7yr651njovkg3j3k8w4ji7akvoaiv3sflzdfo3znzc18o8vum5pqzk2zalt9vkshlxdg886q1lqrxgj2yw5tdk7jvo4ybw8d6k0jr3zzvw0si9guibscr3k98v3pl3o7brqf1egjwe39lhiyr0wi23gpdy7biqwi8xgdan8rzgegrge6f2l1vtyirgl7nfvj3ek5kpoaewt2pdt4gzp6n1x0h4pdneed2hpb1a7asf89zjzo9e73j7n4t2tvh9i5prpsizv2jz0340r9tn6rb8o880d8sa0w85efqr5a6ze382rfjqi59yedmp6w4kdtki7rxrp7rfq5hgnp6rr2kv7nfk3ejmem7vhonp4rim06ebj9flm2ryqkpbhixr3rq30r6b4xyoogbgc0r917zt24w3neit2dyldph8c1mf24p51vuy4ekftlnfxsik9pxa0lq53mu7lm0snxf0txhmt50tqqy3jdpfv7oitjy7dtmzxnaval8bwou8ybbmp201dxtrs0aptv6n5fq1411oc3fs7xyga55u9folzichuf1lt2ihjgyhlv4h1mhknui3gktsiwr1hpix58yecchso5eq2xd4wq5zmbm63wl0q8yb71we34x93we2l75j1zwybb2zh0qib5dlqrdm884nkqus7gkqynsoj3b86ettlxhsx7vowebidfwwcgal97xifb2875rir9vijii1g7ovegw8xm6ddsbl4jy5j82ipd9axommpyp6ifjh1ns5lb15i55fjg149k0jh17fvo5fh5o74w8udroh7bnvmykg99l4amho3p0tmutdoenn01nmi930kzlr46ahj0g30ha9ziuvyo3o6l388rsg0baa0rduka2q4uvc8oh3q8ihjmvybwregkr5mpvmtqcp6yfua9z0hylu4mcjr3lly61dvw6ek9dxifvvzaqy52e3hjxmy4me6tn35ohi2z2evi67c8r9zh2puqyojqapn9nphddtfmzhs7ir57cqifxrkr0ao7jh46ptdtk501nk5g8nlbcga7ktqkbkb41q6q32qphzebse71tb9h2nklrmkwt81555l4v3amg35x73l3g16u6g0nnogpxrzgkgy2t0ejsaraokq0z6odfth809lyqohwdsr1hh3a1tqfyz66c1j9wmgyci7afffq5coionopfb1x1r4a7fznkdbw0hq9lns314uyzq7az3qridzxk91cv5dgzsp82r1qgevnonu1v02lh84fnq4i2ukg9z7im0duxic17ojo4w22r5y9zo2sjgg7dsrog71yc9joqx9h85qkcrz337hframpy9hdm21oqre2mw475beggdmixrv4wxeje6rujsioo6wjz2i144v6fqnuxpc40tlb4mb7d9lzbgyighs69zgd6e29ahjlcwux6xdwfrqux6lsusqhk5n2qxlntpeczyh0c2xjqmx0h398gyekq5qkpdiuemq27e1cnvt5m6wdd8e4uo38va4d9w4ela1ayj9g569vac6o7sjb5eoriy387r0f0vqdz0pj71xgzoz9dlcyf8mxsqd28c92o09wsigv9lw9wmzimo24eii3e5xf1oaysw9aigqpzzxg7d4ytce79ut18trk8hiq9wcfgx48po3rd39i04mdx13yls5qo6i5jjwwws09rh9eur9l2zestq5008n96pf1usdn7xc1oduznenno322fg6g12hiz875qm25tprfdceh4lv8jqsq6m == 
\b\4\1\o\e\5\s\e\4\k\0\v\e\8\m\f\x\x\5\2\e\i\m\b\l\n\k\k\8\h\0\3\o\n\p\j\y\6\z\s\i\p\n\k\8\q\q\s\x\q\4\a\2\x\g\8\i\d\k\8\1\q\8\s\i\7\4\s\e\h\f\v\b\v\a\3\l\x\t\r\e\n\j\r\w\q\6\a\g\t\4\o\3\7\v\k\2\m\y\w\z\3\b\6\n\7\o\o\v\n\f\c\0\6\6\l\1\d\b\s\d\m\n\z\b\l\6\w\p\b\n\4\f\g\4\4\7\z\m\f\q\l\p\h\n\h\p\q\4\8\m\y\n\n\0\f\v\p\9\b\2\4\8\u\b\4\k\k\p\5\y\n\e\7\q\y\v\o\p\u\n\e\6\t\b\d\w\6\u\0\3\c\d\a\p\h\r\p\r\4\r\o\j\3\a\u\g\n\k\8\r\h\p\v\9\x\l\7\x\0\4\9\o\t\k\n\5\z\s\3\e\u\f\u\0\c\i\g\i\0\j\c\q\h\9\b\s\2\k\h\c\0\f\b\9\9\a\y\c\a\f\k\u\a\5\h\e\h\d\i\y\c\u\g\2\i\1\u\v\e\9\b\r\1\d\w\f\0\1\j\x\t\g\t\a\k\l\1\4\b\p\m\6\r\d\l\2\y\g\b\v\l\p\8\2\q\g\2\o\d\0\7\w\m\p\4\a\5\e\u\m\g\h\v\p\d\f\v\m\6\a\m\j\3\u\g\k\i\4\d\z\o\f\5\1\l\s\m\6\7\u\o\s\s\g\6\g\a\h\7\q\x\7\d\x\r\a\0\i\h\6\r\y\0\1\c\z\5\3\8\g\2\a\5\c\7\9\v\e\4\v\0\5\c\f\i\8\3\l\9\t\9\h\e\m\3\l\g\r\9\q\z\b\y\1\c\1\j\m\1\o\j\y\f\u\k\h\a\s\9\n\4\4\0\e\9\9\u\l\b\l\t\p\0\z\6\s\7\5\9\7\5\m\y\b\0\s\u\w\l\h\x\x\v\6\g\x\u\5\l\j\a\q\c\2\m\2\k\i\k\c\w\5\a\9\e\p\0\x\8\e\6\c\t\p\4\b\u\1\a\p\a\j\w\q\7\u\7\x\y\g\r\g\z\t\s\x\u\5\l\c\5\v\9\n\k\y\b\2\l\4\2\z\4\h\e\e\l\o\f\p\g\9\t\t\m\8\h\g\6\k\v\e\x\j\2\q\k\m\p\s\g\e\n\6\n\h\x\m\3\3\5\t\s\s\j\u\r\d\w\o\3\p\m\y\v\m\6\v\x\j\t\x\c\a\l\l\4\d\4\6\2\m\5\k\z\e\f\2\a\q\d\h\g\z\i\t\n\m\h\m\c\5\9\h\d\h\v\c\4\h\h\0\c\n\m\9\4\z\8\7\r\m\e\v\n\m\p\2\x\s\4\m\d\a\e\3\0\2\e\0\a\g\j\j\g\d\3\r\3\q\p\f\u\b\z\x\n\7\r\v\7\6\i\j\9\s\1\6\j\b\8\3\3\0\z\1\s\9\0\w\j\r\i\l\9\t\n\q\y\h\u\d\y\f\5\8\4\3\m\e\j\4\9\7\5\6\b\i\h\1\s\n\e\s\e\s\t\o\5\f\j\2\y\s\f\2\h\d\h\g\f\3\b\3\l\h\z\l\g\v\d\f\p\y\o\d\x\v\y\4\e\5\b\y\a\o\w\s\9\a\d\e\f\j\2\n\d\e\l\2\k\h\8\n\9\m\s\s\h\b\4\z\p\v\4\j\g\w\5\6\i\8\4\1\n\u\m\4\b\3\4\4\l\p\z\1\4\k\p\0\h\v\h\y\f\e\k\0\l\z\d\f\g\a\f\e\o\n\a\2\j\z\g\4\n\s\h\r\4\g\p\k\1\j\w\n\t\u\n\0\6\5\m\3\o\0\0\l\i\j\e\l\s\t\t\a\n\5\z\y\h\f\a\9\m\g\n\a\v\q\h\g\b\6\f\6\y\3\5\r\n\h\h\t\n\k\t\q\8\t\0\x\k\g\z\a\c\t\p\e\l\s\q\w\l\u\q\4\u\0\s\d\8\n\3\2\1\3\w\h\d\q\v\t\s\8\h\q\c\c\n\9\p\r\z\9\8\r\c\p\6\u\2\w\m\u\e\s\i\k\j\o\7\6\o\g\o\v\w\s\n\f\b\z\n\6\p\8\h\u\1\l\v\8\7\h\8\k\q\q\u\7\y\z\i\d\k\a\j\4\k\3\s\c\n\g\f\m\0\u\0\p\n\k\j\1\l\6\d\y\b\c\z\p\u\m\u\8\s\b\m\g\h\t\3\s\8\8\a\f\c\v\4\o\t\s\z\5\f\r\3\2\a\4\c\6\2\l\3\p\0\q\q\3\x\d\h\2\4\o\7\j\m\7\u\6\t\s\b\k\w\k\u\a\i\2\o\c\l\c\8\u\j\u\3\6\w\z\o\7\g\c\f\p\4\1\3\i\3\0\6\6\e\3\5\w\o\m\h\z\t\a\0\6\6\1\p\t\z\1\2\m\g\f\a\t\u\q\j\4\9\t\0\m\x\3\w\4\8\i\v\6\1\t\g\o\k\j\8\d\2\0\9\a\6\x\z\v\o\0\4\9\0\4\l\x\c\v\6\9\n\6\l\n\f\a\j\n\4\f\z\l\9\h\t\i\2\o\n\s\m\c\u\2\i\e\8\x\5\y\4\y\n\p\a\a\n\b\y\5\8\8\b\2\s\7\n\c\r\d\z\q\f\y\0\q\5\k\b\b\h\r\j\1\d\p\l\f\y\v\w\e\2\n\9\n\a\r\p\u\m\u\9\z\6\p\9\c\8\y\e\9\p\u\2\q\o\6\o\e\h\l\9\a\6\0\e\g\f\q\m\l\5\1\r\5\d\y\l\w\i\1\y\a\x\1\v\a\p\j\n\6\9\t\z\h\2\j\j\r\4\q\9\x\w\4\0\2\s\4\z\d\d\8\8\9\m\0\2\9\6\o\9\2\2\6\t\t\r\u\x\w\i\i\p\l\k\g\e\8\0\7\2\3\g\c\e\c\s\s\q\8\0\g\u\4\b\y\7\2\q\5\c\3\5\z\a\7\w\a\y\h\u\b\u\7\t\9\t\0\x\f\7\r\x\q\x\0\a\k\v\c\d\s\r\y\2\r\f\d\4\6\b\a\m\z\8\h\y\p\g\r\d\5\t\y\9\h\c\0\0\x\u\a\i\i\2\s\i\2\6\3\y\4\2\k\o\9\e\j\w\m\q\4\s\q\q\p\a\w\a\b\r\5\e\1\c\g\m\w\j\1\9\i\s\d\k\g\n\5\i\d\l\l\5\1\c\b\o\f\6\8\t\0\b\p\z\s\i\5\k\u\v\i\s\p\v\k\v\x\p\7\b\1\h\0\v\u\l\g\c\d\i\a\q\3\l\0\8\t\d\u\5\y\v\e\s\0\a\k\4\2\o\y\f\b\k\8\r\n\j\c\9\m\q\a\r\b\3\w\3\x\b\7\2\c\e\r\u\g\0\0\6\t\y\h\c\h\s\5\9\o\t\n\3\n\l\l\9\3\w\w\m\q\i\k\j\7\l\3\s\z\a\u\5\5\d\b\u\h\w\n\t\o\b\i\4\b\3\i\o\v\j\a\b\3\z\6\e\n\0\0\x\g\k\d\p\x\x\j\9\4\a\x\8\b\m\t\g\4\x\9\k\w\9\v\5\1\b\z\c\0\4\1\c\2\2\0\g\p\b\g\9\4\5\l\m\i\m\d\3\8\x\m\v\r\b\l\y\o\3\v\1\i\4\k\p\n\o\1\o\0\c\l\n\5\u\u\m\r\z\e\g\r\t\g\x\3\l\a\7\u\q\i\x\o\b\m\r\j\h\i\r\4\0\t\
j\m\v\u\6\5\8\3\h\1\u\t\e\r\6\y\u\c\o\9\h\6\c\y\b\j\r\a\0\t\v\6\a\x\h\o\z\g\y\j\9\s\5\5\o\o\0\6\f\7\2\i\r\6\e\a\l\8\9\g\b\y\a\j\6\s\g\x\6\7\s\9\7\2\2\5\9\n\d\c\t\8\o\5\7\7\g\p\n\j\g\7\6\f\m\1\b\b\a\9\0\i\h\9\6\l\7\1\4\p\h\c\b\k\g\g\7\h\f\h\p\l\2\2\z\3\6\d\d\y\v\7\j\4\g\m\j\2\p\c\f\b\v\b\2\q\w\q\g\v\c\n\b\r\v\o\7\n\k\u\p\1\1\b\m\8\7\m\0\k\4\p\5\b\c\w\2\m\a\t\v\8\e\y\x\1\4\2\b\o\w\i\k\j\o\l\7\w\6\e\1\v\j\6\e\u\s\o\p\1\4\l\8\7\j\o\8\l\8\u\7\z\3\3\t\l\0\q\m\b\7\n\y\z\r\e\8\2\e\3\j\s\4\s\9\y\m\r\d\5\y\7\l\5\o\z\b\8\u\t\h\r\n\2\2\b\n\d\v\8\l\n\8\p\3\g\i\d\w\b\0\q\a\g\j\o\n\7\o\0\t\n\7\i\r\f\9\m\1\y\0\m\g\m\n\v\u\0\x\c\x\c\9\6\y\1\2\7\1\o\a\i\x\t\m\j\7\9\2\q\p\r\a\s\t\p\r\k\i\x\u\n\f\7\d\m\f\t\j\e\w\i\n\n\g\0\g\w\t\x\f\y\k\5\6\k\q\5\h\g\6\k\w\v\a\y\w\2\7\6\5\o\4\l\s\g\1\a\g\e\a\z\y\b\p\x\a\t\m\e\2\m\p\l\s\5\4\h\g\y\j\i\m\j\y\4\m\0\4\p\i\i\v\4\w\y\1\1\v\p\5\i\8\8\n\q\6\b\4\l\5\o\0\x\o\j\j\g\1\b\s\g\n\q\z\y\d\s\j\k\z\u\4\3\t\a\s\n\4\p\o\e\1\e\z\c\4\x\t\i\o\i\1\l\w\8\j\b\g\z\x\b\1\f\h\v\p\i\g\3\d\k\z\s\i\f\t\6\1\r\5\w\m\2\t\g\d\l\x\c\0\j\x\0\x\e\v\2\0\g\x\k\t\y\k\y\f\7\a\c\3\f\7\m\7\f\q\q\u\7\y\r\6\5\1\n\j\o\v\k\g\3\j\3\k\8\w\4\j\i\7\a\k\v\o\a\i\v\3\s\f\l\z\d\f\o\3\z\n\z\c\1\8\o\8\v\u\m\5\p\q\z\k\2\z\a\l\t\9\v\k\s\h\l\x\d\g\8\8\6\q\1\l\q\r\x\g\j\2\y\w\5\t\d\k\7\j\v\o\4\y\b\w\8\d\6\k\0\j\r\3\z\z\v\w\0\s\i\9\g\u\i\b\s\c\r\3\k\9\8\v\3\p\l\3\o\7\b\r\q\f\1\e\g\j\w\e\3\9\l\h\i\y\r\0\w\i\2\3\g\p\d\y\7\b\i\q\w\i\8\x\g\d\a\n\8\r\z\g\e\g\r\g\e\6\f\2\l\1\v\t\y\i\r\g\l\7\n\f\v\j\3\e\k\5\k\p\o\a\e\w\t\2\p\d\t\4\g\z\p\6\n\1\x\0\h\4\p\d\n\e\e\d\2\h\p\b\1\a\7\a\s\f\8\9\z\j\z\o\9\e\7\3\j\7\n\4\t\2\t\v\h\9\i\5\p\r\p\s\i\z\v\2\j\z\0\3\4\0\r\9\t\n\6\r\b\8\o\8\8\0\d\8\s\a\0\w\8\5\e\f\q\r\5\a\6\z\e\3\8\2\r\f\j\q\i\5\9\y\e\d\m\p\6\w\4\k\d\t\k\i\7\r\x\r\p\7\r\f\q\5\h\g\n\p\6\r\r\2\k\v\7\n\f\k\3\e\j\m\e\m\7\v\h\o\n\p\4\r\i\m\0\6\e\b\j\9\f\l\m\2\r\y\q\k\p\b\h\i\x\r\3\r\q\3\0\r\6\b\4\x\y\o\o\g\b\g\c\0\r\9\1\7\z\t\2\4\w\3\n\e\i\t\2\d\y\l\d\p\h\8\c\1\m\f\2\4\p\5\1\v\u\y\4\e\k\f\t\l\n\f\x\s\i\k\9\p\x\a\0\l\q\5\3\m\u\7\l\m\0\s\n\x\f\0\t\x\h\m\t\5\0\t\q\q\y\3\j\d\p\f\v\7\o\i\t\j\y\7\d\t\m\z\x\n\a\v\a\l\8\b\w\o\u\8\y\b\b\m\p\2\0\1\d\x\t\r\s\0\a\p\t\v\6\n\5\f\q\1\4\1\1\o\c\3\f\s\7\x\y\g\a\5\5\u\9\f\o\l\z\i\c\h\u\f\1\l\t\2\i\h\j\g\y\h\l\v\4\h\1\m\h\k\n\u\i\3\g\k\t\s\i\w\r\1\h\p\i\x\5\8\y\e\c\c\h\s\o\5\e\q\2\x\d\4\w\q\5\z\m\b\m\6\3\w\l\0\q\8\y\b\7\1\w\e\3\4\x\9\3\w\e\2\l\7\5\j\1\z\w\y\b\b\2\z\h\0\q\i\b\5\d\l\q\r\d\m\8\8\4\n\k\q\u\s\7\g\k\q\y\n\s\o\j\3\b\8\6\e\t\t\l\x\h\s\x\7\v\o\w\e\b\i\d\f\w\w\c\g\a\l\9\7\x\i\f\b\2\8\7\5\r\i\r\9\v\i\j\i\i\1\g\7\o\v\e\g\w\8\x\m\6\d\d\s\b\l\4\j\y\5\j\8\2\i\p\d\9\a\x\o\m\m\p\y\p\6\i\f\j\h\1\n\s\5\l\b\1\5\i\5\5\f\j\g\1\4\9\k\0\j\h\1\7\f\v\o\5\f\h\5\o\7\4\w\8\u\d\r\o\h\7\b\n\v\m\y\k\g\9\9\l\4\a\m\h\o\3\p\0\t\m\u\t\d\o\e\n\n\0\1\n\m\i\9\3\0\k\z\l\r\4\6\a\h\j\0\g\3\0\h\a\9\z\i\u\v\y\o\3\o\6\l\3\8\8\r\s\g\0\b\a\a\0\r\d\u\k\a\2\q\4\u\v\c\8\o\h\3\q\8\i\h\j\m\v\y\b\w\r\e\g\k\r\5\m\p\v\m\t\q\c\p\6\y\f\u\a\9\z\0\h\y\l\u\4\m\c\j\r\3\l\l\y\6\1\d\v\w\6\e\k\9\d\x\i\f\v\v\z\a\q\y\5\2\e\3\h\j\x\m\y\4\m\e\6\t\n\3\5\o\h\i\2\z\2\e\v\i\6\7\c\8\r\9\z\h\2\p\u\q\y\o\j\q\a\p\n\9\n\p\h\d\d\t\f\m\z\h\s\7\i\r\5\7\c\q\i\f\x\r\k\r\0\a\o\7\j\h\4\6\p\t\d\t\k\5\0\1\n\k\5\g\8\n\l\b\c\g\a\7\k\t\q\k\b\k\b\4\1\q\6\q\3\2\q\p\h\z\e\b\s\e\7\1\t\b\9\h\2\n\k\l\r\m\k\w\t\8\1\5\5\5\l\4\v\3\a\m\g\3\5\x\7\3\l\3\g\1\6\u\6\g\0\n\n\o\g\p\x\r\z\g\k\g\y\2\t\0\e\j\s\a\r\a\o\k\q\0\z\6\o\d\f\t\h\8\0\9\l\y\q\o\h\w\d\s\r\1\h\h\3\a\1\t\q\f\y\z\6\6\c\1\j\9\w\m\g\y\c\i\7\a\f\f\f\q\5\c\o\i\o\n\o\p\f\b\1\x\1\r\4\a\7\f\z\n\k\d\b\w\0\h\q\9\l\n\s\3\1\4\u\y\z\q\7\a\z
\3\q\r\i\d\z\x\k\9\1\c\v\5\d\g\z\s\p\8\2\r\1\q\g\e\v\n\o\n\u\1\v\0\2\l\h\8\4\f\n\q\4\i\2\u\k\g\9\z\7\i\m\0\d\u\x\i\c\1\7\o\j\o\4\w\2\2\r\5\y\9\z\o\2\s\j\g\g\7\d\s\r\o\g\7\1\y\c\9\j\o\q\x\9\h\8\5\q\k\c\r\z\3\3\7\h\f\r\a\m\p\y\9\h\d\m\2\1\o\q\r\e\2\m\w\4\7\5\b\e\g\g\d\m\i\x\r\v\4\w\x\e\j\e\6\r\u\j\s\i\o\o\6\w\j\z\2\i\1\4\4\v\6\f\q\n\u\x\p\c\4\0\t\l\b\4\m\b\7\d\9\l\z\b\g\y\i\g\h\s\6\9\z\g\d\6\e\2\9\a\h\j\l\c\w\u\x\6\x\d\w\f\r\q\u\x\6\l\s\u\s\q\h\k\5\n\2\q\x\l\n\t\p\e\c\z\y\h\0\c\2\x\j\q\m\x\0\h\3\9\8\g\y\e\k\q\5\q\k\p\d\i\u\e\m\q\2\7\e\1\c\n\v\t\5\m\6\w\d\d\8\e\4\u\o\3\8\v\a\4\d\9\w\4\e\l\a\1\a\y\j\9\g\5\6\9\v\a\c\6\o\7\s\j\b\5\e\o\r\i\y\3\8\7\r\0\f\0\v\q\d\z\0\p\j\7\1\x\g\z\o\z\9\d\l\c\y\f\8\m\x\s\q\d\2\8\c\9\2\o\0\9\w\s\i\g\v\9\l\w\9\w\m\z\i\m\o\2\4\e\i\i\3\e\5\x\f\1\o\a\y\s\w\9\a\i\g\q\p\z\z\x\g\7\d\4\y\t\c\e\7\9\u\t\1\8\t\r\k\8\h\i\q\9\w\c\f\g\x\4\8\p\o\3\r\d\3\9\i\0\4\m\d\x\1\3\y\l\s\5\q\o\6\i\5\j\j\w\w\w\s\0\9\r\h\9\e\u\r\9\l\2\z\e\s\t\q\5\0\0\8\n\9\6\p\f\1\u\s\d\n\7\x\c\1\o\d\u\z\n\e\n\n\o\3\2\2\f\g\6\g\1\2\h\i\z\8\7\5\q\m\2\5\t\p\r\f\d\c\e\h\4\l\v\8\j\q\s\q\6\m ]] 00:07:04.779 ************************************ 00:07:04.779 END TEST dd_rw_offset 00:07:04.779 ************************************ 00:07:04.779 00:07:04.779 real 0m1.399s 00:07:04.779 user 0m0.972s 00:07:04.779 sys 0m0.624s 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.779 18:57:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.779 { 00:07:04.779 "subsystems": [ 00:07:04.779 { 00:07:04.779 "subsystem": "bdev", 00:07:04.779 "config": [ 00:07:04.779 { 00:07:04.779 "params": { 00:07:04.779 "trtype": "pcie", 00:07:04.779 "traddr": "0000:00:10.0", 00:07:04.779 "name": "Nvme0" 00:07:04.779 }, 00:07:04.779 "method": "bdev_nvme_attach_controller" 00:07:04.779 }, 00:07:04.779 { 00:07:04.779 "method": "bdev_wait_for_examine" 00:07:04.779 } 00:07:04.779 ] 00:07:04.779 } 00:07:04.779 ] 00:07:04.779 } 00:07:04.779 [2024-07-15 18:57:32.042333] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:04.779 [2024-07-15 18:57:32.042482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63008 ] 00:07:05.039 [2024-07-15 18:57:32.186398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.039 [2024-07-15 18:57:32.294351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.298 [2024-07-15 18:57:32.351167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.557  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:05.557 00:07:05.557 18:57:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.557 00:07:05.557 real 0m19.597s 00:07:05.557 user 0m14.197s 00:07:05.557 sys 0m7.183s 00:07:05.557 18:57:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.557 18:57:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.557 ************************************ 00:07:05.557 END TEST spdk_dd_basic_rw 00:07:05.557 ************************************ 00:07:05.557 18:57:32 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:05.557 18:57:32 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:05.557 18:57:32 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.557 18:57:32 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.557 18:57:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:05.557 ************************************ 00:07:05.557 START TEST spdk_dd_posix 00:07:05.557 ************************************ 00:07:05.557 18:57:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:05.815 * Looking for test storage... 
00:07:05.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:05.815 * First test run, liburing in use 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:05.815 ************************************ 00:07:05.815 START TEST dd_flag_append 00:07:05.815 ************************************ 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=auxka09h624yz3wsxuq5nomzsbn7p96x 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=x5r475bnna8udx3h7xyxfzzz2a6hpch2 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s auxka09h624yz3wsxuq5nomzsbn7p96x 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s x5r475bnna8udx3h7xyxfzzz2a6hpch2 00:07:05.815 18:57:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:05.815 [2024-07-15 18:57:32.945624] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:05.815 [2024-07-15 18:57:32.945725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63066 ] 00:07:05.815 [2024-07-15 18:57:33.088003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.073 [2024-07-15 18:57:33.205804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.073 [2024-07-15 18:57:33.268012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.331  Copying: 32/32 [B] (average 31 kBps) 00:07:06.331 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ x5r475bnna8udx3h7xyxfzzz2a6hpch2auxka09h624yz3wsxuq5nomzsbn7p96x == \x\5\r\4\7\5\b\n\n\a\8\u\d\x\3\h\7\x\y\x\f\z\z\z\2\a\6\h\p\c\h\2\a\u\x\k\a\0\9\h\6\2\4\y\z\3\w\s\x\u\q\5\n\o\m\z\s\b\n\7\p\9\6\x ]] 00:07:06.331 00:07:06.331 real 0m0.644s 00:07:06.331 user 0m0.363s 00:07:06.331 sys 0m0.307s 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.331 ************************************ 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:06.331 END TEST dd_flag_append 00:07:06.331 ************************************ 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:06.331 ************************************ 00:07:06.331 START TEST dd_flag_directory 00:07:06.331 ************************************ 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.331 18:57:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.588 [2024-07-15 18:57:33.631841] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:06.588 [2024-07-15 18:57:33.631987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63100 ] 00:07:06.588 [2024-07-15 18:57:33.765975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.589 [2024-07-15 18:57:33.877660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.847 [2024-07-15 18:57:33.936811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.847 [2024-07-15 18:57:33.972526] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.847 [2024-07-15 18:57:33.972608] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.847 [2024-07-15 18:57:33.972623] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.847 [2024-07-15 18:57:34.094472] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.105 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.106 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.106 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.106 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.106 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.106 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:07.106 [2024-07-15 18:57:34.264934] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:07.106 [2024-07-15 18:57:34.265040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63110 ] 00:07:07.365 [2024-07-15 18:57:34.402761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.365 [2024-07-15 18:57:34.521145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.365 [2024-07-15 18:57:34.578793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.365 [2024-07-15 18:57:34.613937] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:07.365 [2024-07-15 18:57:34.614020] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:07.365 [2024-07-15 18:57:34.614035] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.623 [2024-07-15 18:57:34.736350] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.623 00:07:07.623 real 0m1.261s 00:07:07.623 user 0m0.730s 00:07:07.623 sys 0m0.319s 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:07.623 ************************************ 00:07:07.623 END TEST dd_flag_directory 00:07:07.623 
************************************ 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:07.623 ************************************ 00:07:07.623 START TEST dd_flag_nofollow 00:07:07.623 ************************************ 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.623 18:57:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.881 
[2024-07-15 18:57:34.964039] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:07.881 [2024-07-15 18:57:34.964136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63138 ] 00:07:07.881 [2024-07-15 18:57:35.104070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.141 [2024-07-15 18:57:35.223408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.141 [2024-07-15 18:57:35.282179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.141 [2024-07-15 18:57:35.317874] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:08.141 [2024-07-15 18:57:35.317958] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:08.141 [2024-07-15 18:57:35.317989] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.417 [2024-07-15 18:57:35.438669] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:08.417 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:08.417 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.418 18:57:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:08.418 [2024-07-15 18:57:35.610781] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:08.418 [2024-07-15 18:57:35.610877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63152 ] 00:07:08.676 [2024-07-15 18:57:35.750743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.676 [2024-07-15 18:57:35.865530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.676 [2024-07-15 18:57:35.923731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.676 [2024-07-15 18:57:35.956912] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:08.676 [2024-07-15 18:57:35.956986] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:08.676 [2024-07-15 18:57:35.957017] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.934 [2024-07-15 18:57:36.076802] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:08.934 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:08.934 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.934 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:08.934 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:08.934 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:08.934 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.934 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:08.934 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:08.934 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:08.934 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.193 [2024-07-15 18:57:36.242801] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
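The two failing runs above show --iflag=nofollow and --oflag=nofollow doing their job: spdk_dd stops at the dd.dump0.link / dd.dump1.link symlinks with "Too many levels of symbolic links", the ELOOP error O_NOFOLLOW would be expected to raise, and the NOT wrapper turns those failures into passes. The run launched at posix.sh@48 then reads dump0 through the link without the flag and is expected to copy all 512 bytes. By hand, from the repository root:

  ln -fs dd.dump0 test/dd/dd.dump0.link    # link and target side by side in test/dd
  ./build/bin/spdk_dd --if=test/dd/dd.dump0.link --iflag=nofollow --of=test/dd/dd.dump1 \
      || echo 'rejected at the symlink, as expected'
  ./build/bin/spdk_dd --if=test/dd/dd.dump0.link --of=test/dd/dd.dump1   # same copy, link followed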
00:07:09.193 [2024-07-15 18:57:36.242906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63161 ] 00:07:09.193 [2024-07-15 18:57:36.382677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.453 [2024-07-15 18:57:36.500644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.453 [2024-07-15 18:57:36.558808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.712  Copying: 512/512 [B] (average 500 kBps) 00:07:09.712 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 6caknfnipgdcfjg9sb7h6hexvpmqyuv1un0m06jqtearums0u65hkpfm3qyeaxbo3ccrg9htv94gazb7dhztsvtb0jyvuzd9h2gpigw0oln6i18f7mbf4786hn5fup639017jgl3bnowcvdbihdj42xw45mn6l9pj6mw2lisy2tdilundy7vyqlurmvn1nnag1zndko7ae1eeltik8uux62qzn6xm7y7d1ypll8a8vgvx9p9genjqgez4s2wfrkdhgl9w7g932ak5o8qugvofz95u0q2bcvhg3g5rr64nyw2hbhblag9jvb9ch2mw0ixyhm82wzrhbd5qu03vweslhgoxzbls6wbr1yucmtgnu9i7cqf70cm4v76x97mg6mh0x8cz0awol2icdt3p24fwe45c4u5356w1m1zoa1gcm26s5e0yaliwxqe3pf9ou107bh6pg17jbm0al25jd4pa39hdhcidpe2osgeegd7u4uvo0hvg6dl3a384ujdbsfr == \6\c\a\k\n\f\n\i\p\g\d\c\f\j\g\9\s\b\7\h\6\h\e\x\v\p\m\q\y\u\v\1\u\n\0\m\0\6\j\q\t\e\a\r\u\m\s\0\u\6\5\h\k\p\f\m\3\q\y\e\a\x\b\o\3\c\c\r\g\9\h\t\v\9\4\g\a\z\b\7\d\h\z\t\s\v\t\b\0\j\y\v\u\z\d\9\h\2\g\p\i\g\w\0\o\l\n\6\i\1\8\f\7\m\b\f\4\7\8\6\h\n\5\f\u\p\6\3\9\0\1\7\j\g\l\3\b\n\o\w\c\v\d\b\i\h\d\j\4\2\x\w\4\5\m\n\6\l\9\p\j\6\m\w\2\l\i\s\y\2\t\d\i\l\u\n\d\y\7\v\y\q\l\u\r\m\v\n\1\n\n\a\g\1\z\n\d\k\o\7\a\e\1\e\e\l\t\i\k\8\u\u\x\6\2\q\z\n\6\x\m\7\y\7\d\1\y\p\l\l\8\a\8\v\g\v\x\9\p\9\g\e\n\j\q\g\e\z\4\s\2\w\f\r\k\d\h\g\l\9\w\7\g\9\3\2\a\k\5\o\8\q\u\g\v\o\f\z\9\5\u\0\q\2\b\c\v\h\g\3\g\5\r\r\6\4\n\y\w\2\h\b\h\b\l\a\g\9\j\v\b\9\c\h\2\m\w\0\i\x\y\h\m\8\2\w\z\r\h\b\d\5\q\u\0\3\v\w\e\s\l\h\g\o\x\z\b\l\s\6\w\b\r\1\y\u\c\m\t\g\n\u\9\i\7\c\q\f\7\0\c\m\4\v\7\6\x\9\7\m\g\6\m\h\0\x\8\c\z\0\a\w\o\l\2\i\c\d\t\3\p\2\4\f\w\e\4\5\c\4\u\5\3\5\6\w\1\m\1\z\o\a\1\g\c\m\2\6\s\5\e\0\y\a\l\i\w\x\q\e\3\p\f\9\o\u\1\0\7\b\h\6\p\g\1\7\j\b\m\0\a\l\2\5\j\d\4\p\a\3\9\h\d\h\c\i\d\p\e\2\o\s\g\e\e\g\d\7\u\4\u\v\o\0\h\v\g\6\d\l\3\a\3\8\4\u\j\d\b\s\f\r ]] 00:07:09.712 00:07:09.712 real 0m1.933s 00:07:09.712 user 0m1.124s 00:07:09.712 sys 0m0.630s 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:09.712 ************************************ 00:07:09.712 END TEST dd_flag_nofollow 00:07:09.712 ************************************ 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:09.712 ************************************ 00:07:09.712 START TEST dd_flag_noatime 00:07:09.712 ************************************ 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:07:09.712 18:57:36 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721069856 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721069856 00:07:09.712 18:57:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:10.647 18:57:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.905 [2024-07-15 18:57:37.961158] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:10.905 [2024-07-15 18:57:37.961274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63203 ] 00:07:10.905 [2024-07-15 18:57:38.102015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.163 [2024-07-15 18:57:38.225771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.163 [2024-07-15 18:57:38.283493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.421  Copying: 512/512 [B] (average 500 kBps) 00:07:11.421 00:07:11.421 18:57:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.421 18:57:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721069856 )) 00:07:11.421 18:57:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:11.421 18:57:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721069856 )) 00:07:11.421 18:57:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:11.421 [2024-07-15 18:57:38.599399] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
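dd_flag_noatime records dump0's access time up front with stat --printf=%X (atime_if=1721069856), sleeps one second, and copies dump0 with --iflag=noatime; the (( atime_if == 1721069856 )) check above then confirms the read did not update the access time. A second copy without the flag has just been launched and gets its own stat-based comparison below. The same observation can be made by hand, assuming the filesystem is not mounted noatime/relatime, which would mask the effect:

  before=$(stat --printf=%X test/dd/dd.dump0)
  sleep 1
  ./build/bin/spdk_dd --if=test/dd/dd.dump0 --iflag=noatime --of=test/dd/dd.dump1
  after=$(stat --printf=%X test/dd/dd.dump0)
  (( before == after )) && echo 'noatime left the access time untouched'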
00:07:11.421 [2024-07-15 18:57:38.599535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63217 ] 00:07:11.678 [2024-07-15 18:57:38.738473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.678 [2024-07-15 18:57:38.853318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.678 [2024-07-15 18:57:38.909409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.934  Copying: 512/512 [B] (average 500 kBps) 00:07:11.934 00:07:11.934 18:57:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.934 18:57:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721069858 )) 00:07:11.934 00:07:11.934 real 0m2.291s 00:07:11.934 user 0m0.733s 00:07:11.934 sys 0m0.600s 00:07:11.934 18:57:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.934 ************************************ 00:07:11.934 END TEST dd_flag_noatime 00:07:11.934 18:57:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:11.934 ************************************ 00:07:11.934 18:57:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:11.934 18:57:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:11.934 18:57:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.934 18:57:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.934 18:57:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:12.193 ************************************ 00:07:12.193 START TEST dd_flags_misc 00:07:12.193 ************************************ 00:07:12.193 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:07:12.193 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:12.193 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:12.193 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:12.193 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:12.193 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:12.193 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:12.193 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:12.193 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.193 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:12.193 [2024-07-15 18:57:39.305004] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
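dd_flags_misc sweeps a small matrix: each input-side flag in flags_ro=(direct nonblock) is paired with every output-side flag in flags_rw=(direct nonblock sync dsync), giving eight 512-byte copies, each followed by the same content assertion. The shape of the sweep, with the flag values taken from those two arrays, is simply:

  for iflag in direct nonblock; do
    for oflag in direct nonblock sync dsync; do
      ./build/bin/spdk_dd --if=test/dd/dd.dump0 --iflag=$iflag \
                          --of=test/dd/dd.dump1 --oflag=$oflag
    done
  done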
00:07:12.193 [2024-07-15 18:57:39.305158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63251 ] 00:07:12.193 [2024-07-15 18:57:39.448751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.450 [2024-07-15 18:57:39.570801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.450 [2024-07-15 18:57:39.623932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.708  Copying: 512/512 [B] (average 500 kBps) 00:07:12.708 00:07:12.708 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ugvmkgd2vm9808ugboassmenitccrawdbegyv8trx0z1ngjyqpobs1f4lnm62jk7dqrpaqigsf60co3ozknynoerudtvec1vmbz0rpq9ld43p99dd0fwrnas1l391102dgde423w2fipz8fizhtc7kpre1xve4rbfrqibn30e28p1iadw8lhgsbqc42qh1wsl3xuqbymgcbz6nb41jtteyaay8vllo0ofqv06lww2dd8lh4oahxexmwei9npjbypkieq07zdetqrmxeby17wvn9j137imlw3y34haeg0qmdml8ggaa58iojzjcclyt644ttlck78fb8az4fzbrbk3jhyhgaojzz7g5ymye54ju0q48n3xn1feszj8kwyv6i8nh1w8pjoh7vm1fqjy9zqnm7blfbz5hvcdcpdogv3r2u9cs63x4s9wdq45lewqr58h4kspzou0wzhaiyvde7knqjgmvv2ec1lvzxq52ied795tq4yyzsiy99p0o5bxutw == \u\g\v\m\k\g\d\2\v\m\9\8\0\8\u\g\b\o\a\s\s\m\e\n\i\t\c\c\r\a\w\d\b\e\g\y\v\8\t\r\x\0\z\1\n\g\j\y\q\p\o\b\s\1\f\4\l\n\m\6\2\j\k\7\d\q\r\p\a\q\i\g\s\f\6\0\c\o\3\o\z\k\n\y\n\o\e\r\u\d\t\v\e\c\1\v\m\b\z\0\r\p\q\9\l\d\4\3\p\9\9\d\d\0\f\w\r\n\a\s\1\l\3\9\1\1\0\2\d\g\d\e\4\2\3\w\2\f\i\p\z\8\f\i\z\h\t\c\7\k\p\r\e\1\x\v\e\4\r\b\f\r\q\i\b\n\3\0\e\2\8\p\1\i\a\d\w\8\l\h\g\s\b\q\c\4\2\q\h\1\w\s\l\3\x\u\q\b\y\m\g\c\b\z\6\n\b\4\1\j\t\t\e\y\a\a\y\8\v\l\l\o\0\o\f\q\v\0\6\l\w\w\2\d\d\8\l\h\4\o\a\h\x\e\x\m\w\e\i\9\n\p\j\b\y\p\k\i\e\q\0\7\z\d\e\t\q\r\m\x\e\b\y\1\7\w\v\n\9\j\1\3\7\i\m\l\w\3\y\3\4\h\a\e\g\0\q\m\d\m\l\8\g\g\a\a\5\8\i\o\j\z\j\c\c\l\y\t\6\4\4\t\t\l\c\k\7\8\f\b\8\a\z\4\f\z\b\r\b\k\3\j\h\y\h\g\a\o\j\z\z\7\g\5\y\m\y\e\5\4\j\u\0\q\4\8\n\3\x\n\1\f\e\s\z\j\8\k\w\y\v\6\i\8\n\h\1\w\8\p\j\o\h\7\v\m\1\f\q\j\y\9\z\q\n\m\7\b\l\f\b\z\5\h\v\c\d\c\p\d\o\g\v\3\r\2\u\9\c\s\6\3\x\4\s\9\w\d\q\4\5\l\e\w\q\r\5\8\h\4\k\s\p\z\o\u\0\w\z\h\a\i\y\v\d\e\7\k\n\q\j\g\m\v\v\2\e\c\1\l\v\z\x\q\5\2\i\e\d\7\9\5\t\q\4\y\y\z\s\i\y\9\9\p\0\o\5\b\x\u\t\w ]] 00:07:12.708 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.709 18:57:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:12.709 [2024-07-15 18:57:39.936169] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:12.709 [2024-07-15 18:57:39.936243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63260 ] 00:07:12.969 [2024-07-15 18:57:40.067532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.969 [2024-07-15 18:57:40.184307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.969 [2024-07-15 18:57:40.240978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.228  Copying: 512/512 [B] (average 500 kBps) 00:07:13.228 00:07:13.228 18:57:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ugvmkgd2vm9808ugboassmenitccrawdbegyv8trx0z1ngjyqpobs1f4lnm62jk7dqrpaqigsf60co3ozknynoerudtvec1vmbz0rpq9ld43p99dd0fwrnas1l391102dgde423w2fipz8fizhtc7kpre1xve4rbfrqibn30e28p1iadw8lhgsbqc42qh1wsl3xuqbymgcbz6nb41jtteyaay8vllo0ofqv06lww2dd8lh4oahxexmwei9npjbypkieq07zdetqrmxeby17wvn9j137imlw3y34haeg0qmdml8ggaa58iojzjcclyt644ttlck78fb8az4fzbrbk3jhyhgaojzz7g5ymye54ju0q48n3xn1feszj8kwyv6i8nh1w8pjoh7vm1fqjy9zqnm7blfbz5hvcdcpdogv3r2u9cs63x4s9wdq45lewqr58h4kspzou0wzhaiyvde7knqjgmvv2ec1lvzxq52ied795tq4yyzsiy99p0o5bxutw == \u\g\v\m\k\g\d\2\v\m\9\8\0\8\u\g\b\o\a\s\s\m\e\n\i\t\c\c\r\a\w\d\b\e\g\y\v\8\t\r\x\0\z\1\n\g\j\y\q\p\o\b\s\1\f\4\l\n\m\6\2\j\k\7\d\q\r\p\a\q\i\g\s\f\6\0\c\o\3\o\z\k\n\y\n\o\e\r\u\d\t\v\e\c\1\v\m\b\z\0\r\p\q\9\l\d\4\3\p\9\9\d\d\0\f\w\r\n\a\s\1\l\3\9\1\1\0\2\d\g\d\e\4\2\3\w\2\f\i\p\z\8\f\i\z\h\t\c\7\k\p\r\e\1\x\v\e\4\r\b\f\r\q\i\b\n\3\0\e\2\8\p\1\i\a\d\w\8\l\h\g\s\b\q\c\4\2\q\h\1\w\s\l\3\x\u\q\b\y\m\g\c\b\z\6\n\b\4\1\j\t\t\e\y\a\a\y\8\v\l\l\o\0\o\f\q\v\0\6\l\w\w\2\d\d\8\l\h\4\o\a\h\x\e\x\m\w\e\i\9\n\p\j\b\y\p\k\i\e\q\0\7\z\d\e\t\q\r\m\x\e\b\y\1\7\w\v\n\9\j\1\3\7\i\m\l\w\3\y\3\4\h\a\e\g\0\q\m\d\m\l\8\g\g\a\a\5\8\i\o\j\z\j\c\c\l\y\t\6\4\4\t\t\l\c\k\7\8\f\b\8\a\z\4\f\z\b\r\b\k\3\j\h\y\h\g\a\o\j\z\z\7\g\5\y\m\y\e\5\4\j\u\0\q\4\8\n\3\x\n\1\f\e\s\z\j\8\k\w\y\v\6\i\8\n\h\1\w\8\p\j\o\h\7\v\m\1\f\q\j\y\9\z\q\n\m\7\b\l\f\b\z\5\h\v\c\d\c\p\d\o\g\v\3\r\2\u\9\c\s\6\3\x\4\s\9\w\d\q\4\5\l\e\w\q\r\5\8\h\4\k\s\p\z\o\u\0\w\z\h\a\i\y\v\d\e\7\k\n\q\j\g\m\v\v\2\e\c\1\l\v\z\x\q\5\2\i\e\d\7\9\5\t\q\4\y\y\z\s\i\y\9\9\p\0\o\5\b\x\u\t\w ]] 00:07:13.228 18:57:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.228 18:57:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:13.486 [2024-07-15 18:57:40.559779] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
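Every iteration ends with the same long [[ ... == \... ]] comparison: the backslash-escaped right-hand side is just the expected file contents, so the assertion is that none of the flag combinations corrupts the 512 copied bytes. Outside the harness, a rough spot-check of the same property is a byte-wise compare of the two dump files:

  cmp test/dd/dd.dump0 test/dd/dd.dump1 && echo '512-byte copy is intact'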
00:07:13.486 [2024-07-15 18:57:40.559913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63270 ] 00:07:13.486 [2024-07-15 18:57:40.698476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.744 [2024-07-15 18:57:40.813044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.744 [2024-07-15 18:57:40.868025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.003  Copying: 512/512 [B] (average 83 kBps) 00:07:14.003 00:07:14.003 18:57:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ugvmkgd2vm9808ugboassmenitccrawdbegyv8trx0z1ngjyqpobs1f4lnm62jk7dqrpaqigsf60co3ozknynoerudtvec1vmbz0rpq9ld43p99dd0fwrnas1l391102dgde423w2fipz8fizhtc7kpre1xve4rbfrqibn30e28p1iadw8lhgsbqc42qh1wsl3xuqbymgcbz6nb41jtteyaay8vllo0ofqv06lww2dd8lh4oahxexmwei9npjbypkieq07zdetqrmxeby17wvn9j137imlw3y34haeg0qmdml8ggaa58iojzjcclyt644ttlck78fb8az4fzbrbk3jhyhgaojzz7g5ymye54ju0q48n3xn1feszj8kwyv6i8nh1w8pjoh7vm1fqjy9zqnm7blfbz5hvcdcpdogv3r2u9cs63x4s9wdq45lewqr58h4kspzou0wzhaiyvde7knqjgmvv2ec1lvzxq52ied795tq4yyzsiy99p0o5bxutw == \u\g\v\m\k\g\d\2\v\m\9\8\0\8\u\g\b\o\a\s\s\m\e\n\i\t\c\c\r\a\w\d\b\e\g\y\v\8\t\r\x\0\z\1\n\g\j\y\q\p\o\b\s\1\f\4\l\n\m\6\2\j\k\7\d\q\r\p\a\q\i\g\s\f\6\0\c\o\3\o\z\k\n\y\n\o\e\r\u\d\t\v\e\c\1\v\m\b\z\0\r\p\q\9\l\d\4\3\p\9\9\d\d\0\f\w\r\n\a\s\1\l\3\9\1\1\0\2\d\g\d\e\4\2\3\w\2\f\i\p\z\8\f\i\z\h\t\c\7\k\p\r\e\1\x\v\e\4\r\b\f\r\q\i\b\n\3\0\e\2\8\p\1\i\a\d\w\8\l\h\g\s\b\q\c\4\2\q\h\1\w\s\l\3\x\u\q\b\y\m\g\c\b\z\6\n\b\4\1\j\t\t\e\y\a\a\y\8\v\l\l\o\0\o\f\q\v\0\6\l\w\w\2\d\d\8\l\h\4\o\a\h\x\e\x\m\w\e\i\9\n\p\j\b\y\p\k\i\e\q\0\7\z\d\e\t\q\r\m\x\e\b\y\1\7\w\v\n\9\j\1\3\7\i\m\l\w\3\y\3\4\h\a\e\g\0\q\m\d\m\l\8\g\g\a\a\5\8\i\o\j\z\j\c\c\l\y\t\6\4\4\t\t\l\c\k\7\8\f\b\8\a\z\4\f\z\b\r\b\k\3\j\h\y\h\g\a\o\j\z\z\7\g\5\y\m\y\e\5\4\j\u\0\q\4\8\n\3\x\n\1\f\e\s\z\j\8\k\w\y\v\6\i\8\n\h\1\w\8\p\j\o\h\7\v\m\1\f\q\j\y\9\z\q\n\m\7\b\l\f\b\z\5\h\v\c\d\c\p\d\o\g\v\3\r\2\u\9\c\s\6\3\x\4\s\9\w\d\q\4\5\l\e\w\q\r\5\8\h\4\k\s\p\z\o\u\0\w\z\h\a\i\y\v\d\e\7\k\n\q\j\g\m\v\v\2\e\c\1\l\v\z\x\q\5\2\i\e\d\7\9\5\t\q\4\y\y\z\s\i\y\9\9\p\0\o\5\b\x\u\t\w ]] 00:07:14.003 18:57:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.003 18:57:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:14.003 [2024-07-15 18:57:41.181165] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
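The two output flags exercised in this and the following iteration request synchronized writes: --oflag=sync asks for file-integrity completion (data plus metadata, in the manner of O_SYNC), while --oflag=dsync asks for data-integrity completion only (O_DSYNC); the exact open-flag mapping inside spdk_dd is not shown in this log. For a 512-byte copy the visible result is identical, only the invocation differs:

  ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=sync    # data + metadata flushed
  ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=dsync   # data only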
00:07:14.003 [2024-07-15 18:57:41.181269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63279 ] 00:07:14.261 [2024-07-15 18:57:41.319915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.261 [2024-07-15 18:57:41.434629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.261 [2024-07-15 18:57:41.489598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.520  Copying: 512/512 [B] (average 250 kBps) 00:07:14.520 00:07:14.520 18:57:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ugvmkgd2vm9808ugboassmenitccrawdbegyv8trx0z1ngjyqpobs1f4lnm62jk7dqrpaqigsf60co3ozknynoerudtvec1vmbz0rpq9ld43p99dd0fwrnas1l391102dgde423w2fipz8fizhtc7kpre1xve4rbfrqibn30e28p1iadw8lhgsbqc42qh1wsl3xuqbymgcbz6nb41jtteyaay8vllo0ofqv06lww2dd8lh4oahxexmwei9npjbypkieq07zdetqrmxeby17wvn9j137imlw3y34haeg0qmdml8ggaa58iojzjcclyt644ttlck78fb8az4fzbrbk3jhyhgaojzz7g5ymye54ju0q48n3xn1feszj8kwyv6i8nh1w8pjoh7vm1fqjy9zqnm7blfbz5hvcdcpdogv3r2u9cs63x4s9wdq45lewqr58h4kspzou0wzhaiyvde7knqjgmvv2ec1lvzxq52ied795tq4yyzsiy99p0o5bxutw == \u\g\v\m\k\g\d\2\v\m\9\8\0\8\u\g\b\o\a\s\s\m\e\n\i\t\c\c\r\a\w\d\b\e\g\y\v\8\t\r\x\0\z\1\n\g\j\y\q\p\o\b\s\1\f\4\l\n\m\6\2\j\k\7\d\q\r\p\a\q\i\g\s\f\6\0\c\o\3\o\z\k\n\y\n\o\e\r\u\d\t\v\e\c\1\v\m\b\z\0\r\p\q\9\l\d\4\3\p\9\9\d\d\0\f\w\r\n\a\s\1\l\3\9\1\1\0\2\d\g\d\e\4\2\3\w\2\f\i\p\z\8\f\i\z\h\t\c\7\k\p\r\e\1\x\v\e\4\r\b\f\r\q\i\b\n\3\0\e\2\8\p\1\i\a\d\w\8\l\h\g\s\b\q\c\4\2\q\h\1\w\s\l\3\x\u\q\b\y\m\g\c\b\z\6\n\b\4\1\j\t\t\e\y\a\a\y\8\v\l\l\o\0\o\f\q\v\0\6\l\w\w\2\d\d\8\l\h\4\o\a\h\x\e\x\m\w\e\i\9\n\p\j\b\y\p\k\i\e\q\0\7\z\d\e\t\q\r\m\x\e\b\y\1\7\w\v\n\9\j\1\3\7\i\m\l\w\3\y\3\4\h\a\e\g\0\q\m\d\m\l\8\g\g\a\a\5\8\i\o\j\z\j\c\c\l\y\t\6\4\4\t\t\l\c\k\7\8\f\b\8\a\z\4\f\z\b\r\b\k\3\j\h\y\h\g\a\o\j\z\z\7\g\5\y\m\y\e\5\4\j\u\0\q\4\8\n\3\x\n\1\f\e\s\z\j\8\k\w\y\v\6\i\8\n\h\1\w\8\p\j\o\h\7\v\m\1\f\q\j\y\9\z\q\n\m\7\b\l\f\b\z\5\h\v\c\d\c\p\d\o\g\v\3\r\2\u\9\c\s\6\3\x\4\s\9\w\d\q\4\5\l\e\w\q\r\5\8\h\4\k\s\p\z\o\u\0\w\z\h\a\i\y\v\d\e\7\k\n\q\j\g\m\v\v\2\e\c\1\l\v\z\x\q\5\2\i\e\d\7\9\5\t\q\4\y\y\z\s\i\y\9\9\p\0\o\5\b\x\u\t\w ]] 00:07:14.520 18:57:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:14.520 18:57:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:14.520 18:57:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:14.520 18:57:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:14.520 18:57:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.520 18:57:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:14.520 [2024-07-15 18:57:41.798447] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:14.520 [2024-07-15 18:57:41.798584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63289 ] 00:07:14.779 [2024-07-15 18:57:41.933796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.779 [2024-07-15 18:57:42.044662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.037 [2024-07-15 18:57:42.100481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.297  Copying: 512/512 [B] (average 500 kBps) 00:07:15.297 00:07:15.297 18:57:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ik2aamdmla7g3tz7o3dyvo43i7qdqt3kdn83rbmk10gnx17hw0m4jrppyoiul4i4zmgnyraf9ym9nft1umjvqg5jdgbvp3ko7mjn9lzm3b8brykvnlgqf4kia9354qzwfbmikpb4bpa1j6x3jpdsgklg6dvmrxuu23ad0wu4vthzsu4cah586e4uvsy1mymdnkhdhb7pkx8f1uf5r7xdjv5ymrtbuo2meoy2z7z5v04vrb29f32byolf862iuuey1bjil1p6nzqal94qhlvo0px17mca7lilg9mer5hvh911m1fmklvefq272ofugywzsz3dph30dwxx29h2vjmvdx3x8pqgzp00rfton5qr8ieeao8q886vnmr45et5rl2fnnnwcihgxd9y0me7rdmt01lhuozkyfix1kyf9833r72vdpbrt9jhokl913q7b46ipk4aua7019p5ffj1hb9kalbi1inkjzysnx8fgozitz7x7rq9yc2w11jspfnwunlt == \i\k\2\a\a\m\d\m\l\a\7\g\3\t\z\7\o\3\d\y\v\o\4\3\i\7\q\d\q\t\3\k\d\n\8\3\r\b\m\k\1\0\g\n\x\1\7\h\w\0\m\4\j\r\p\p\y\o\i\u\l\4\i\4\z\m\g\n\y\r\a\f\9\y\m\9\n\f\t\1\u\m\j\v\q\g\5\j\d\g\b\v\p\3\k\o\7\m\j\n\9\l\z\m\3\b\8\b\r\y\k\v\n\l\g\q\f\4\k\i\a\9\3\5\4\q\z\w\f\b\m\i\k\p\b\4\b\p\a\1\j\6\x\3\j\p\d\s\g\k\l\g\6\d\v\m\r\x\u\u\2\3\a\d\0\w\u\4\v\t\h\z\s\u\4\c\a\h\5\8\6\e\4\u\v\s\y\1\m\y\m\d\n\k\h\d\h\b\7\p\k\x\8\f\1\u\f\5\r\7\x\d\j\v\5\y\m\r\t\b\u\o\2\m\e\o\y\2\z\7\z\5\v\0\4\v\r\b\2\9\f\3\2\b\y\o\l\f\8\6\2\i\u\u\e\y\1\b\j\i\l\1\p\6\n\z\q\a\l\9\4\q\h\l\v\o\0\p\x\1\7\m\c\a\7\l\i\l\g\9\m\e\r\5\h\v\h\9\1\1\m\1\f\m\k\l\v\e\f\q\2\7\2\o\f\u\g\y\w\z\s\z\3\d\p\h\3\0\d\w\x\x\2\9\h\2\v\j\m\v\d\x\3\x\8\p\q\g\z\p\0\0\r\f\t\o\n\5\q\r\8\i\e\e\a\o\8\q\8\8\6\v\n\m\r\4\5\e\t\5\r\l\2\f\n\n\n\w\c\i\h\g\x\d\9\y\0\m\e\7\r\d\m\t\0\1\l\h\u\o\z\k\y\f\i\x\1\k\y\f\9\8\3\3\r\7\2\v\d\p\b\r\t\9\j\h\o\k\l\9\1\3\q\7\b\4\6\i\p\k\4\a\u\a\7\0\1\9\p\5\f\f\j\1\h\b\9\k\a\l\b\i\1\i\n\k\j\z\y\s\n\x\8\f\g\o\z\i\t\z\7\x\7\r\q\9\y\c\2\w\1\1\j\s\p\f\n\w\u\n\l\t ]] 00:07:15.297 18:57:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:15.297 18:57:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:15.297 [2024-07-15 18:57:42.383925] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:15.297 [2024-07-15 18:57:42.384007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63304 ] 00:07:15.297 [2024-07-15 18:57:42.515342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.555 [2024-07-15 18:57:42.626299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.555 [2024-07-15 18:57:42.682981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.814  Copying: 512/512 [B] (average 500 kBps) 00:07:15.814 00:07:15.814 18:57:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ik2aamdmla7g3tz7o3dyvo43i7qdqt3kdn83rbmk10gnx17hw0m4jrppyoiul4i4zmgnyraf9ym9nft1umjvqg5jdgbvp3ko7mjn9lzm3b8brykvnlgqf4kia9354qzwfbmikpb4bpa1j6x3jpdsgklg6dvmrxuu23ad0wu4vthzsu4cah586e4uvsy1mymdnkhdhb7pkx8f1uf5r7xdjv5ymrtbuo2meoy2z7z5v04vrb29f32byolf862iuuey1bjil1p6nzqal94qhlvo0px17mca7lilg9mer5hvh911m1fmklvefq272ofugywzsz3dph30dwxx29h2vjmvdx3x8pqgzp00rfton5qr8ieeao8q886vnmr45et5rl2fnnnwcihgxd9y0me7rdmt01lhuozkyfix1kyf9833r72vdpbrt9jhokl913q7b46ipk4aua7019p5ffj1hb9kalbi1inkjzysnx8fgozitz7x7rq9yc2w11jspfnwunlt == \i\k\2\a\a\m\d\m\l\a\7\g\3\t\z\7\o\3\d\y\v\o\4\3\i\7\q\d\q\t\3\k\d\n\8\3\r\b\m\k\1\0\g\n\x\1\7\h\w\0\m\4\j\r\p\p\y\o\i\u\l\4\i\4\z\m\g\n\y\r\a\f\9\y\m\9\n\f\t\1\u\m\j\v\q\g\5\j\d\g\b\v\p\3\k\o\7\m\j\n\9\l\z\m\3\b\8\b\r\y\k\v\n\l\g\q\f\4\k\i\a\9\3\5\4\q\z\w\f\b\m\i\k\p\b\4\b\p\a\1\j\6\x\3\j\p\d\s\g\k\l\g\6\d\v\m\r\x\u\u\2\3\a\d\0\w\u\4\v\t\h\z\s\u\4\c\a\h\5\8\6\e\4\u\v\s\y\1\m\y\m\d\n\k\h\d\h\b\7\p\k\x\8\f\1\u\f\5\r\7\x\d\j\v\5\y\m\r\t\b\u\o\2\m\e\o\y\2\z\7\z\5\v\0\4\v\r\b\2\9\f\3\2\b\y\o\l\f\8\6\2\i\u\u\e\y\1\b\j\i\l\1\p\6\n\z\q\a\l\9\4\q\h\l\v\o\0\p\x\1\7\m\c\a\7\l\i\l\g\9\m\e\r\5\h\v\h\9\1\1\m\1\f\m\k\l\v\e\f\q\2\7\2\o\f\u\g\y\w\z\s\z\3\d\p\h\3\0\d\w\x\x\2\9\h\2\v\j\m\v\d\x\3\x\8\p\q\g\z\p\0\0\r\f\t\o\n\5\q\r\8\i\e\e\a\o\8\q\8\8\6\v\n\m\r\4\5\e\t\5\r\l\2\f\n\n\n\w\c\i\h\g\x\d\9\y\0\m\e\7\r\d\m\t\0\1\l\h\u\o\z\k\y\f\i\x\1\k\y\f\9\8\3\3\r\7\2\v\d\p\b\r\t\9\j\h\o\k\l\9\1\3\q\7\b\4\6\i\p\k\4\a\u\a\7\0\1\9\p\5\f\f\j\1\h\b\9\k\a\l\b\i\1\i\n\k\j\z\y\s\n\x\8\f\g\o\z\i\t\z\7\x\7\r\q\9\y\c\2\w\1\1\j\s\p\f\n\w\u\n\l\t ]] 00:07:15.814 18:57:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:15.814 18:57:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:15.814 [2024-07-15 18:57:42.967859] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:15.814 [2024-07-15 18:57:42.967991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63308 ] 00:07:15.814 [2024-07-15 18:57:43.101156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.073 [2024-07-15 18:57:43.213914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.073 [2024-07-15 18:57:43.270462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.331  Copying: 512/512 [B] (average 166 kBps) 00:07:16.331 00:07:16.331 18:57:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ik2aamdmla7g3tz7o3dyvo43i7qdqt3kdn83rbmk10gnx17hw0m4jrppyoiul4i4zmgnyraf9ym9nft1umjvqg5jdgbvp3ko7mjn9lzm3b8brykvnlgqf4kia9354qzwfbmikpb4bpa1j6x3jpdsgklg6dvmrxuu23ad0wu4vthzsu4cah586e4uvsy1mymdnkhdhb7pkx8f1uf5r7xdjv5ymrtbuo2meoy2z7z5v04vrb29f32byolf862iuuey1bjil1p6nzqal94qhlvo0px17mca7lilg9mer5hvh911m1fmklvefq272ofugywzsz3dph30dwxx29h2vjmvdx3x8pqgzp00rfton5qr8ieeao8q886vnmr45et5rl2fnnnwcihgxd9y0me7rdmt01lhuozkyfix1kyf9833r72vdpbrt9jhokl913q7b46ipk4aua7019p5ffj1hb9kalbi1inkjzysnx8fgozitz7x7rq9yc2w11jspfnwunlt == \i\k\2\a\a\m\d\m\l\a\7\g\3\t\z\7\o\3\d\y\v\o\4\3\i\7\q\d\q\t\3\k\d\n\8\3\r\b\m\k\1\0\g\n\x\1\7\h\w\0\m\4\j\r\p\p\y\o\i\u\l\4\i\4\z\m\g\n\y\r\a\f\9\y\m\9\n\f\t\1\u\m\j\v\q\g\5\j\d\g\b\v\p\3\k\o\7\m\j\n\9\l\z\m\3\b\8\b\r\y\k\v\n\l\g\q\f\4\k\i\a\9\3\5\4\q\z\w\f\b\m\i\k\p\b\4\b\p\a\1\j\6\x\3\j\p\d\s\g\k\l\g\6\d\v\m\r\x\u\u\2\3\a\d\0\w\u\4\v\t\h\z\s\u\4\c\a\h\5\8\6\e\4\u\v\s\y\1\m\y\m\d\n\k\h\d\h\b\7\p\k\x\8\f\1\u\f\5\r\7\x\d\j\v\5\y\m\r\t\b\u\o\2\m\e\o\y\2\z\7\z\5\v\0\4\v\r\b\2\9\f\3\2\b\y\o\l\f\8\6\2\i\u\u\e\y\1\b\j\i\l\1\p\6\n\z\q\a\l\9\4\q\h\l\v\o\0\p\x\1\7\m\c\a\7\l\i\l\g\9\m\e\r\5\h\v\h\9\1\1\m\1\f\m\k\l\v\e\f\q\2\7\2\o\f\u\g\y\w\z\s\z\3\d\p\h\3\0\d\w\x\x\2\9\h\2\v\j\m\v\d\x\3\x\8\p\q\g\z\p\0\0\r\f\t\o\n\5\q\r\8\i\e\e\a\o\8\q\8\8\6\v\n\m\r\4\5\e\t\5\r\l\2\f\n\n\n\w\c\i\h\g\x\d\9\y\0\m\e\7\r\d\m\t\0\1\l\h\u\o\z\k\y\f\i\x\1\k\y\f\9\8\3\3\r\7\2\v\d\p\b\r\t\9\j\h\o\k\l\9\1\3\q\7\b\4\6\i\p\k\4\a\u\a\7\0\1\9\p\5\f\f\j\1\h\b\9\k\a\l\b\i\1\i\n\k\j\z\y\s\n\x\8\f\g\o\z\i\t\z\7\x\7\r\q\9\y\c\2\w\1\1\j\s\p\f\n\w\u\n\l\t ]] 00:07:16.331 18:57:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:16.331 18:57:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:16.331 [2024-07-15 18:57:43.566967] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
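The per-iteration "Copying: 512/512 [B] (average N kBps)" figures range from 83 kBps to 500 kBps here, but with only 512 bytes transferred they are dominated by start-up and open/flush latency and should not be read as I/O bandwidth. If a timing signal is wanted, timing a run directly is more honest than the 512-byte average:

  time ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=dsync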
00:07:16.331 [2024-07-15 18:57:43.567059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63323 ] 00:07:16.589 [2024-07-15 18:57:43.702642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.589 [2024-07-15 18:57:43.806076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.589 [2024-07-15 18:57:43.862190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.848  Copying: 512/512 [B] (average 250 kBps) 00:07:16.848 00:07:16.848 18:57:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ik2aamdmla7g3tz7o3dyvo43i7qdqt3kdn83rbmk10gnx17hw0m4jrppyoiul4i4zmgnyraf9ym9nft1umjvqg5jdgbvp3ko7mjn9lzm3b8brykvnlgqf4kia9354qzwfbmikpb4bpa1j6x3jpdsgklg6dvmrxuu23ad0wu4vthzsu4cah586e4uvsy1mymdnkhdhb7pkx8f1uf5r7xdjv5ymrtbuo2meoy2z7z5v04vrb29f32byolf862iuuey1bjil1p6nzqal94qhlvo0px17mca7lilg9mer5hvh911m1fmklvefq272ofugywzsz3dph30dwxx29h2vjmvdx3x8pqgzp00rfton5qr8ieeao8q886vnmr45et5rl2fnnnwcihgxd9y0me7rdmt01lhuozkyfix1kyf9833r72vdpbrt9jhokl913q7b46ipk4aua7019p5ffj1hb9kalbi1inkjzysnx8fgozitz7x7rq9yc2w11jspfnwunlt == \i\k\2\a\a\m\d\m\l\a\7\g\3\t\z\7\o\3\d\y\v\o\4\3\i\7\q\d\q\t\3\k\d\n\8\3\r\b\m\k\1\0\g\n\x\1\7\h\w\0\m\4\j\r\p\p\y\o\i\u\l\4\i\4\z\m\g\n\y\r\a\f\9\y\m\9\n\f\t\1\u\m\j\v\q\g\5\j\d\g\b\v\p\3\k\o\7\m\j\n\9\l\z\m\3\b\8\b\r\y\k\v\n\l\g\q\f\4\k\i\a\9\3\5\4\q\z\w\f\b\m\i\k\p\b\4\b\p\a\1\j\6\x\3\j\p\d\s\g\k\l\g\6\d\v\m\r\x\u\u\2\3\a\d\0\w\u\4\v\t\h\z\s\u\4\c\a\h\5\8\6\e\4\u\v\s\y\1\m\y\m\d\n\k\h\d\h\b\7\p\k\x\8\f\1\u\f\5\r\7\x\d\j\v\5\y\m\r\t\b\u\o\2\m\e\o\y\2\z\7\z\5\v\0\4\v\r\b\2\9\f\3\2\b\y\o\l\f\8\6\2\i\u\u\e\y\1\b\j\i\l\1\p\6\n\z\q\a\l\9\4\q\h\l\v\o\0\p\x\1\7\m\c\a\7\l\i\l\g\9\m\e\r\5\h\v\h\9\1\1\m\1\f\m\k\l\v\e\f\q\2\7\2\o\f\u\g\y\w\z\s\z\3\d\p\h\3\0\d\w\x\x\2\9\h\2\v\j\m\v\d\x\3\x\8\p\q\g\z\p\0\0\r\f\t\o\n\5\q\r\8\i\e\e\a\o\8\q\8\8\6\v\n\m\r\4\5\e\t\5\r\l\2\f\n\n\n\w\c\i\h\g\x\d\9\y\0\m\e\7\r\d\m\t\0\1\l\h\u\o\z\k\y\f\i\x\1\k\y\f\9\8\3\3\r\7\2\v\d\p\b\r\t\9\j\h\o\k\l\9\1\3\q\7\b\4\6\i\p\k\4\a\u\a\7\0\1\9\p\5\f\f\j\1\h\b\9\k\a\l\b\i\1\i\n\k\j\z\y\s\n\x\8\f\g\o\z\i\t\z\7\x\7\r\q\9\y\c\2\w\1\1\j\s\p\f\n\w\u\n\l\t ]] 00:07:16.848 00:07:16.848 real 0m4.877s 00:07:16.848 user 0m2.779s 00:07:16.848 sys 0m2.260s 00:07:16.848 18:57:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.848 ************************************ 00:07:16.848 END TEST dd_flags_misc 00:07:16.848 ************************************ 00:07:16.848 18:57:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:17.107 * Second test run, disabling liburing, forcing AIO 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:17.107 ************************************ 00:07:17.107 START TEST dd_flag_append_forced_aio 00:07:17.107 ************************************ 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=9yqdiw5mif2cocxk7gremab2ebojqij9 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=nufkyudjqfvf35fq7rrixx2cnbdyb45n 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 9yqdiw5mif2cocxk7gremab2ebojqij9 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s nufkyudjqfvf35fq7rrixx2cnbdyb45n 00:07:17.107 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:17.107 [2024-07-15 18:57:44.225363] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
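The banner "* Second test run, disabling liburing, forcing AIO" and the DD_APP+=("--aio") line above mark the switch for the _forced_aio variants: every spdk_dd invocation from here on carries --aio, steering the tool's file I/O off the liburing path used in the first pass and onto the POSIX AIO code path, while the assertions themselves stay the same. The append case, for example, differs from the earlier one only in that single flag:

  ./build/bin/spdk_dd --aio --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=append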
00:07:17.107 [2024-07-15 18:57:44.225465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63351 ] 00:07:17.107 [2024-07-15 18:57:44.365737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.365 [2024-07-15 18:57:44.472962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.365 [2024-07-15 18:57:44.529434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.625  Copying: 32/32 [B] (average 31 kBps) 00:07:17.625 00:07:17.625 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ nufkyudjqfvf35fq7rrixx2cnbdyb45n9yqdiw5mif2cocxk7gremab2ebojqij9 == \n\u\f\k\y\u\d\j\q\f\v\f\3\5\f\q\7\r\r\i\x\x\2\c\n\b\d\y\b\4\5\n\9\y\q\d\i\w\5\m\i\f\2\c\o\c\x\k\7\g\r\e\m\a\b\2\e\b\o\j\q\i\j\9 ]] 00:07:17.625 00:07:17.625 real 0m0.626s 00:07:17.625 user 0m0.353s 00:07:17.625 sys 0m0.152s 00:07:17.625 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.625 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:17.625 ************************************ 00:07:17.625 END TEST dd_flag_append_forced_aio 00:07:17.625 ************************************ 00:07:17.625 18:57:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:17.625 18:57:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:17.625 18:57:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.625 18:57:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.625 18:57:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:17.625 ************************************ 00:07:17.625 START TEST dd_flag_directory_forced_aio 00:07:17.625 ************************************ 00:07:17.625 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:07:17.626 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.626 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:17.626 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.626 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.626 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.626 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.626 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:07:17.626 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.626 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.626 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.626 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.626 18:57:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.626 [2024-07-15 18:57:44.901137] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:17.626 [2024-07-15 18:57:44.901229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63378 ] 00:07:17.905 [2024-07-15 18:57:45.042575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.905 [2024-07-15 18:57:45.153861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.167 [2024-07-15 18:57:45.207103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.167 [2024-07-15 18:57:45.240558] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:18.167 [2024-07-15 18:57:45.240603] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:18.167 [2024-07-15 18:57:45.240618] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.167 [2024-07-15 18:57:45.354732] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:18.426 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:18.426 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.426 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:18.426 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.427 18:57:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:18.427 [2024-07-15 18:57:45.509688] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:18.427 [2024-07-15 18:57:45.509791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63393 ] 00:07:18.427 [2024-07-15 18:57:45.644371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.685 [2024-07-15 18:57:45.769657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.686 [2024-07-15 18:57:45.826313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.686 [2024-07-15 18:57:45.861733] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:18.686 [2024-07-15 18:57:45.861788] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:18.686 [2024-07-15 18:57:45.861803] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.945 [2024-07-15 18:57:45.980494] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:18.945 
18:57:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.945 00:07:18.945 real 0m1.250s 00:07:18.945 user 0m0.737s 00:07:18.945 sys 0m0.301s 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.945 ************************************ 00:07:18.945 END TEST dd_flag_directory_forced_aio 00:07:18.945 ************************************ 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:18.945 ************************************ 00:07:18.945 START TEST dd_flag_nofollow_forced_aio 00:07:18.945 ************************************ 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.945 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.945 [2024-07-15 18:57:46.218664] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:18.945 [2024-07-15 18:57:46.218766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63416 ] 00:07:19.203 [2024-07-15 18:57:46.358789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.203 [2024-07-15 18:57:46.474978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.462 [2024-07-15 18:57:46.531956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.462 [2024-07-15 18:57:46.566438] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:19.462 [2024-07-15 18:57:46.566492] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:19.462 [2024-07-15 18:57:46.566555] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.462 [2024-07-15 18:57:46.683406] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
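The surrounding trace is the dd_flag_nofollow_forced_aio test: posix.sh links dd.dump0 and dd.dump1 to *.link symlinks, expects spdk_dd to refuse the symlinks when --iflag=nofollow / --oflag=nofollow is set (the "Too many levels of symbolic links" errors), and finally copies through the link without the flag. A condensed bash sketch of that flow, using the same binary and dump-file paths that appear in the trace (the error handling below replaces the harness's NOT helper), is:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

  ln -fs "$SRC" "$SRC.link"    # the test links dump0 -> dump0.link
  ln -fs "$DST" "$DST.link"    # and dump1 -> dump1.link

  # Opening the symlink for reading with --iflag=nofollow has to fail
  # (reported above as "Too many levels of symbolic links").
  if "$DD" --aio --if="$SRC.link" --iflag=nofollow --of="$DST"; then
      echo "nofollow read unexpectedly succeeded" >&2
      exit 1
  fi

  # The same applies to writing through a symlink with --oflag=nofollow.
  if "$DD" --aio --if="$SRC" --of="$DST.link" --oflag=nofollow; then
      echo "nofollow write unexpectedly succeeded" >&2
      exit 1
  fi

  # Without the flag the copy through the link goes through (posix.sh@48 in the trace).
  "$DD" --aio --if="$SRC.link" --of="$DST"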
00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.720 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.721 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.721 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.721 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.721 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.721 18:57:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:19.721 [2024-07-15 18:57:46.833122] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:19.721 [2024-07-15 18:57:46.833200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63431 ] 00:07:19.721 [2024-07-15 18:57:46.966787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.979 [2024-07-15 18:57:47.079293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.979 [2024-07-15 18:57:47.136752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.979 [2024-07-15 18:57:47.171297] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:19.979 [2024-07-15 18:57:47.171366] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:19.979 [2024-07-15 18:57:47.171398] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.237 [2024-07-15 18:57:47.288227] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:20.237 18:57:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:20.237 18:57:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.237 18:57:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:20.237 18:57:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.237 18:57:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:20.237 18:57:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.237 18:57:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:20.237 18:57:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:20.237 18:57:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:20.237 18:57:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.237 [2024-07-15 18:57:47.446761] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:20.237 [2024-07-15 18:57:47.446996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63444 ] 00:07:20.495 [2024-07-15 18:57:47.580009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.495 [2024-07-15 18:57:47.693435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.495 [2024-07-15 18:57:47.749057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.754  Copying: 512/512 [B] (average 500 kBps) 00:07:20.754 00:07:20.754 ************************************ 00:07:20.754 END TEST dd_flag_nofollow_forced_aio 00:07:20.754 ************************************ 00:07:20.754 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ o8uxsrppcnb0r3ty39pijvevdqvqjzt6y2db0ucry5j032e8vb8kbn273diz3g1ikqv3aajsw812og6hqspm6135e8q9bprhi2uw6xt02dx39id5dolwbfy1vp9qoitix797dhi4dn98q77zc2k3pt87xwxlz5ojjpryk96oxq1e32x2ze3lgbiz06mc0ncef3i09sgi6vxr41ke2ay5qtx6mzbfzmlc85j8vlweoptm2itlyqnpro7dnhqz3ngzjyvsalmhfgj5lb5rg8zqx2meixpq9k0cqr6duycpjxpffslziu6tf5ar2c2d1q2i47gzsn1ircm327rh31ca4glwxmeakymk0lyox29jobfeto1vvuejybdtqtx1kcgyge9n3jonmwutgm7gjg9z1q20wr0wlgwfbz2rvq7g6xql78qtig1ct2i3bc9thpr6xdc7g0evlpplyexq4g1aetgw46s8lqz9id92faazujx6ccynae2b1cckwpxzfopq == \o\8\u\x\s\r\p\p\c\n\b\0\r\3\t\y\3\9\p\i\j\v\e\v\d\q\v\q\j\z\t\6\y\2\d\b\0\u\c\r\y\5\j\0\3\2\e\8\v\b\8\k\b\n\2\7\3\d\i\z\3\g\1\i\k\q\v\3\a\a\j\s\w\8\1\2\o\g\6\h\q\s\p\m\6\1\3\5\e\8\q\9\b\p\r\h\i\2\u\w\6\x\t\0\2\d\x\3\9\i\d\5\d\o\l\w\b\f\y\1\v\p\9\q\o\i\t\i\x\7\9\7\d\h\i\4\d\n\9\8\q\7\7\z\c\2\k\3\p\t\8\7\x\w\x\l\z\5\o\j\j\p\r\y\k\9\6\o\x\q\1\e\3\2\x\2\z\e\3\l\g\b\i\z\0\6\m\c\0\n\c\e\f\3\i\0\9\s\g\i\6\v\x\r\4\1\k\e\2\a\y\5\q\t\x\6\m\z\b\f\z\m\l\c\8\5\j\8\v\l\w\e\o\p\t\m\2\i\t\l\y\q\n\p\r\o\7\d\n\h\q\z\3\n\g\z\j\y\v\s\a\l\m\h\f\g\j\5\l\b\5\r\g\8\z\q\x\2\m\e\i\x\p\q\9\k\0\c\q\r\6\d\u\y\c\p\j\x\p\f\f\s\l\z\i\u\6\t\f\5\a\r\2\c\2\d\1\q\2\i\4\7\g\z\s\n\1\i\r\c\m\3\2\7\r\h\3\1\c\a\4\g\l\w\x\m\e\a\k\y\m\k\0\l\y\o\x\2\9\j\o\b\f\e\t\o\1\v\v\u\e\j\y\b\d\t\q\t\x\1\k\c\g\y\g\e\9\n\3\j\o\n\m\w\u\t\g\m\7\g\j\g\9\z\1\q\2\0\w\r\0\w\l\g\w\f\b\z\2\r\v\q\7\g\6\x\q\l\7\8\q\t\i\g\1\c\t\2\i\3\b\c\9\t\h\p\r\6\x\d\c\7\g\0\e\v\l\p\p\l\y\e\x\q\4\g\1\a\e\t\g\w\4\6\s\8\l\q\z\9\i\d\9\2\f\a\a\z\u\j\x\6\c\c\y\n\a\e\2\b\1\c\c\k\w\p\x\z\f\o\p\q ]] 00:07:20.754 00:07:20.754 real 0m1.882s 00:07:20.754 user 0m1.089s 00:07:20.754 sys 0m0.463s 00:07:20.754 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.754 18:57:48 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:21.012 ************************************ 00:07:21.012 START TEST dd_flag_noatime_forced_aio 00:07:21.012 ************************************ 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721069867 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721069868 00:07:21.012 18:57:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:21.947 18:57:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.947 [2024-07-15 18:57:49.165521] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
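The dd_flag_noatime_forced_aio run starting above snapshots the source file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime and checks that the atime did not move; a later copy without the flag is expected to bump it past the recorded value. Stripped of the harness, the core of that check looks roughly like this (same paths and commands as in the trace):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

  atime_before=$(stat --printf=%X "$SRC")   # epoch seconds of the last access
  sleep 1                                   # so a bumped atime is distinguishable

  "$DD" --aio --if="$SRC" --iflag=noatime --of="$DST"

  atime_after=$(stat --printf=%X "$SRC")
  (( atime_before == atime_after )) || { echo "noatime was not honoured" >&2; exit 1; }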
00:07:21.947 [2024-07-15 18:57:49.165611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63479 ] 00:07:22.206 [2024-07-15 18:57:49.308166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.206 [2024-07-15 18:57:49.451234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.465 [2024-07-15 18:57:49.509116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.725  Copying: 512/512 [B] (average 500 kBps) 00:07:22.725 00:07:22.725 18:57:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.725 18:57:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721069867 )) 00:07:22.725 18:57:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.725 18:57:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721069868 )) 00:07:22.725 18:57:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.725 [2024-07-15 18:57:49.848345] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:22.725 [2024-07-15 18:57:49.848450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63496 ] 00:07:22.725 [2024-07-15 18:57:49.987914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.984 [2024-07-15 18:57:50.097323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.984 [2024-07-15 18:57:50.153961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.243  Copying: 512/512 [B] (average 500 kBps) 00:07:23.243 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.243 ************************************ 00:07:23.243 END TEST dd_flag_noatime_forced_aio 00:07:23.243 ************************************ 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721069870 )) 00:07:23.243 00:07:23.243 real 0m2.351s 00:07:23.243 user 0m0.777s 00:07:23.243 sys 0m0.331s 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.243 18:57:50 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:23.243 ************************************ 00:07:23.243 START TEST dd_flags_misc_forced_aio 00:07:23.243 ************************************ 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:23.243 18:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:23.244 18:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:23.244 18:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:23.244 18:57:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:23.244 [2024-07-15 18:57:50.531159] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:23.244 [2024-07-15 18:57:50.531247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63528 ] 00:07:23.572 [2024-07-15 18:57:50.662439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.572 [2024-07-15 18:57:50.758339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.572 [2024-07-15 18:57:50.810398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.833  Copying: 512/512 [B] (average 500 kBps) 00:07:23.833 00:07:23.833 18:57:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 96o65dchkvg0rt7js51by9t6bj1t0b2zw32io00bolgf9lg021yw4n9i0j3rilapjdpp3dts76c2qoapz4kxg75yff3c831c6ac57sei0vdkl648hsq4tlolfzzpedh4og9vznuypx70y4yp570vr3fzuwlalvb2r15bg7t1x80jeehztvjdlu01iuw3jjmk9s4gje2vjdppd853wcrlwwwyxmkinoakdtfhnlfloxi8hnch36siajmb3v3obnimpd0h8fdmbtoyuv00s6t81uq5s8ecnli5wjayh97o4a92flqwrxjz1xhkf1twzg0nodu80xjj2dwyf78cu5qoghfhkaoxtpb5xyhmfuv8hi9wtepyyccaqleu19rxn3j3texquko3qb1tj81ptwi42xquet1t64auk52inyws0mlmwnc2urt992j9kmgcpp2gziqaw3lcveoh4z6w4md3mflwvv7dqn2km6ubc39we5k0r3xp9q2cjrv934korukf == 
\9\6\o\6\5\d\c\h\k\v\g\0\r\t\7\j\s\5\1\b\y\9\t\6\b\j\1\t\0\b\2\z\w\3\2\i\o\0\0\b\o\l\g\f\9\l\g\0\2\1\y\w\4\n\9\i\0\j\3\r\i\l\a\p\j\d\p\p\3\d\t\s\7\6\c\2\q\o\a\p\z\4\k\x\g\7\5\y\f\f\3\c\8\3\1\c\6\a\c\5\7\s\e\i\0\v\d\k\l\6\4\8\h\s\q\4\t\l\o\l\f\z\z\p\e\d\h\4\o\g\9\v\z\n\u\y\p\x\7\0\y\4\y\p\5\7\0\v\r\3\f\z\u\w\l\a\l\v\b\2\r\1\5\b\g\7\t\1\x\8\0\j\e\e\h\z\t\v\j\d\l\u\0\1\i\u\w\3\j\j\m\k\9\s\4\g\j\e\2\v\j\d\p\p\d\8\5\3\w\c\r\l\w\w\w\y\x\m\k\i\n\o\a\k\d\t\f\h\n\l\f\l\o\x\i\8\h\n\c\h\3\6\s\i\a\j\m\b\3\v\3\o\b\n\i\m\p\d\0\h\8\f\d\m\b\t\o\y\u\v\0\0\s\6\t\8\1\u\q\5\s\8\e\c\n\l\i\5\w\j\a\y\h\9\7\o\4\a\9\2\f\l\q\w\r\x\j\z\1\x\h\k\f\1\t\w\z\g\0\n\o\d\u\8\0\x\j\j\2\d\w\y\f\7\8\c\u\5\q\o\g\h\f\h\k\a\o\x\t\p\b\5\x\y\h\m\f\u\v\8\h\i\9\w\t\e\p\y\y\c\c\a\q\l\e\u\1\9\r\x\n\3\j\3\t\e\x\q\u\k\o\3\q\b\1\t\j\8\1\p\t\w\i\4\2\x\q\u\e\t\1\t\6\4\a\u\k\5\2\i\n\y\w\s\0\m\l\m\w\n\c\2\u\r\t\9\9\2\j\9\k\m\g\c\p\p\2\g\z\i\q\a\w\3\l\c\v\e\o\h\4\z\6\w\4\m\d\3\m\f\l\w\v\v\7\d\q\n\2\k\m\6\u\b\c\3\9\w\e\5\k\0\r\3\x\p\9\q\2\c\j\r\v\9\3\4\k\o\r\u\k\f ]] 00:07:23.833 18:57:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:23.833 18:57:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:24.092 [2024-07-15 18:57:51.149439] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:24.092 [2024-07-15 18:57:51.149542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63530 ] 00:07:24.092 [2024-07-15 18:57:51.280829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.351 [2024-07-15 18:57:51.391465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.351 [2024-07-15 18:57:51.446940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.610  Copying: 512/512 [B] (average 500 kBps) 00:07:24.610 00:07:24.610 18:57:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 96o65dchkvg0rt7js51by9t6bj1t0b2zw32io00bolgf9lg021yw4n9i0j3rilapjdpp3dts76c2qoapz4kxg75yff3c831c6ac57sei0vdkl648hsq4tlolfzzpedh4og9vznuypx70y4yp570vr3fzuwlalvb2r15bg7t1x80jeehztvjdlu01iuw3jjmk9s4gje2vjdppd853wcrlwwwyxmkinoakdtfhnlfloxi8hnch36siajmb3v3obnimpd0h8fdmbtoyuv00s6t81uq5s8ecnli5wjayh97o4a92flqwrxjz1xhkf1twzg0nodu80xjj2dwyf78cu5qoghfhkaoxtpb5xyhmfuv8hi9wtepyyccaqleu19rxn3j3texquko3qb1tj81ptwi42xquet1t64auk52inyws0mlmwnc2urt992j9kmgcpp2gziqaw3lcveoh4z6w4md3mflwvv7dqn2km6ubc39we5k0r3xp9q2cjrv934korukf == 
\9\6\o\6\5\d\c\h\k\v\g\0\r\t\7\j\s\5\1\b\y\9\t\6\b\j\1\t\0\b\2\z\w\3\2\i\o\0\0\b\o\l\g\f\9\l\g\0\2\1\y\w\4\n\9\i\0\j\3\r\i\l\a\p\j\d\p\p\3\d\t\s\7\6\c\2\q\o\a\p\z\4\k\x\g\7\5\y\f\f\3\c\8\3\1\c\6\a\c\5\7\s\e\i\0\v\d\k\l\6\4\8\h\s\q\4\t\l\o\l\f\z\z\p\e\d\h\4\o\g\9\v\z\n\u\y\p\x\7\0\y\4\y\p\5\7\0\v\r\3\f\z\u\w\l\a\l\v\b\2\r\1\5\b\g\7\t\1\x\8\0\j\e\e\h\z\t\v\j\d\l\u\0\1\i\u\w\3\j\j\m\k\9\s\4\g\j\e\2\v\j\d\p\p\d\8\5\3\w\c\r\l\w\w\w\y\x\m\k\i\n\o\a\k\d\t\f\h\n\l\f\l\o\x\i\8\h\n\c\h\3\6\s\i\a\j\m\b\3\v\3\o\b\n\i\m\p\d\0\h\8\f\d\m\b\t\o\y\u\v\0\0\s\6\t\8\1\u\q\5\s\8\e\c\n\l\i\5\w\j\a\y\h\9\7\o\4\a\9\2\f\l\q\w\r\x\j\z\1\x\h\k\f\1\t\w\z\g\0\n\o\d\u\8\0\x\j\j\2\d\w\y\f\7\8\c\u\5\q\o\g\h\f\h\k\a\o\x\t\p\b\5\x\y\h\m\f\u\v\8\h\i\9\w\t\e\p\y\y\c\c\a\q\l\e\u\1\9\r\x\n\3\j\3\t\e\x\q\u\k\o\3\q\b\1\t\j\8\1\p\t\w\i\4\2\x\q\u\e\t\1\t\6\4\a\u\k\5\2\i\n\y\w\s\0\m\l\m\w\n\c\2\u\r\t\9\9\2\j\9\k\m\g\c\p\p\2\g\z\i\q\a\w\3\l\c\v\e\o\h\4\z\6\w\4\m\d\3\m\f\l\w\v\v\7\d\q\n\2\k\m\6\u\b\c\3\9\w\e\5\k\0\r\3\x\p\9\q\2\c\j\r\v\9\3\4\k\o\r\u\k\f ]] 00:07:24.610 18:57:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:24.610 18:57:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:24.610 [2024-07-15 18:57:51.801333] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:24.610 [2024-07-15 18:57:51.801476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63543 ] 00:07:24.869 [2024-07-15 18:57:51.944356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.869 [2024-07-15 18:57:52.047978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.869 [2024-07-15 18:57:52.104909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.129  Copying: 512/512 [B] (average 166 kBps) 00:07:25.129 00:07:25.129 18:57:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 96o65dchkvg0rt7js51by9t6bj1t0b2zw32io00bolgf9lg021yw4n9i0j3rilapjdpp3dts76c2qoapz4kxg75yff3c831c6ac57sei0vdkl648hsq4tlolfzzpedh4og9vznuypx70y4yp570vr3fzuwlalvb2r15bg7t1x80jeehztvjdlu01iuw3jjmk9s4gje2vjdppd853wcrlwwwyxmkinoakdtfhnlfloxi8hnch36siajmb3v3obnimpd0h8fdmbtoyuv00s6t81uq5s8ecnli5wjayh97o4a92flqwrxjz1xhkf1twzg0nodu80xjj2dwyf78cu5qoghfhkaoxtpb5xyhmfuv8hi9wtepyyccaqleu19rxn3j3texquko3qb1tj81ptwi42xquet1t64auk52inyws0mlmwnc2urt992j9kmgcpp2gziqaw3lcveoh4z6w4md3mflwvv7dqn2km6ubc39we5k0r3xp9q2cjrv934korukf == 
\9\6\o\6\5\d\c\h\k\v\g\0\r\t\7\j\s\5\1\b\y\9\t\6\b\j\1\t\0\b\2\z\w\3\2\i\o\0\0\b\o\l\g\f\9\l\g\0\2\1\y\w\4\n\9\i\0\j\3\r\i\l\a\p\j\d\p\p\3\d\t\s\7\6\c\2\q\o\a\p\z\4\k\x\g\7\5\y\f\f\3\c\8\3\1\c\6\a\c\5\7\s\e\i\0\v\d\k\l\6\4\8\h\s\q\4\t\l\o\l\f\z\z\p\e\d\h\4\o\g\9\v\z\n\u\y\p\x\7\0\y\4\y\p\5\7\0\v\r\3\f\z\u\w\l\a\l\v\b\2\r\1\5\b\g\7\t\1\x\8\0\j\e\e\h\z\t\v\j\d\l\u\0\1\i\u\w\3\j\j\m\k\9\s\4\g\j\e\2\v\j\d\p\p\d\8\5\3\w\c\r\l\w\w\w\y\x\m\k\i\n\o\a\k\d\t\f\h\n\l\f\l\o\x\i\8\h\n\c\h\3\6\s\i\a\j\m\b\3\v\3\o\b\n\i\m\p\d\0\h\8\f\d\m\b\t\o\y\u\v\0\0\s\6\t\8\1\u\q\5\s\8\e\c\n\l\i\5\w\j\a\y\h\9\7\o\4\a\9\2\f\l\q\w\r\x\j\z\1\x\h\k\f\1\t\w\z\g\0\n\o\d\u\8\0\x\j\j\2\d\w\y\f\7\8\c\u\5\q\o\g\h\f\h\k\a\o\x\t\p\b\5\x\y\h\m\f\u\v\8\h\i\9\w\t\e\p\y\y\c\c\a\q\l\e\u\1\9\r\x\n\3\j\3\t\e\x\q\u\k\o\3\q\b\1\t\j\8\1\p\t\w\i\4\2\x\q\u\e\t\1\t\6\4\a\u\k\5\2\i\n\y\w\s\0\m\l\m\w\n\c\2\u\r\t\9\9\2\j\9\k\m\g\c\p\p\2\g\z\i\q\a\w\3\l\c\v\e\o\h\4\z\6\w\4\m\d\3\m\f\l\w\v\v\7\d\q\n\2\k\m\6\u\b\c\3\9\w\e\5\k\0\r\3\x\p\9\q\2\c\j\r\v\9\3\4\k\o\r\u\k\f ]] 00:07:25.129 18:57:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:25.129 18:57:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:25.388 [2024-07-15 18:57:52.461770] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:25.388 [2024-07-15 18:57:52.462806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63551 ] 00:07:25.388 [2024-07-15 18:57:52.601113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.647 [2024-07-15 18:57:52.710866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.647 [2024-07-15 18:57:52.766889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.906  Copying: 512/512 [B] (average 500 kBps) 00:07:25.906 00:07:25.906 18:57:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 96o65dchkvg0rt7js51by9t6bj1t0b2zw32io00bolgf9lg021yw4n9i0j3rilapjdpp3dts76c2qoapz4kxg75yff3c831c6ac57sei0vdkl648hsq4tlolfzzpedh4og9vznuypx70y4yp570vr3fzuwlalvb2r15bg7t1x80jeehztvjdlu01iuw3jjmk9s4gje2vjdppd853wcrlwwwyxmkinoakdtfhnlfloxi8hnch36siajmb3v3obnimpd0h8fdmbtoyuv00s6t81uq5s8ecnli5wjayh97o4a92flqwrxjz1xhkf1twzg0nodu80xjj2dwyf78cu5qoghfhkaoxtpb5xyhmfuv8hi9wtepyyccaqleu19rxn3j3texquko3qb1tj81ptwi42xquet1t64auk52inyws0mlmwnc2urt992j9kmgcpp2gziqaw3lcveoh4z6w4md3mflwvv7dqn2km6ubc39we5k0r3xp9q2cjrv934korukf == 
\9\6\o\6\5\d\c\h\k\v\g\0\r\t\7\j\s\5\1\b\y\9\t\6\b\j\1\t\0\b\2\z\w\3\2\i\o\0\0\b\o\l\g\f\9\l\g\0\2\1\y\w\4\n\9\i\0\j\3\r\i\l\a\p\j\d\p\p\3\d\t\s\7\6\c\2\q\o\a\p\z\4\k\x\g\7\5\y\f\f\3\c\8\3\1\c\6\a\c\5\7\s\e\i\0\v\d\k\l\6\4\8\h\s\q\4\t\l\o\l\f\z\z\p\e\d\h\4\o\g\9\v\z\n\u\y\p\x\7\0\y\4\y\p\5\7\0\v\r\3\f\z\u\w\l\a\l\v\b\2\r\1\5\b\g\7\t\1\x\8\0\j\e\e\h\z\t\v\j\d\l\u\0\1\i\u\w\3\j\j\m\k\9\s\4\g\j\e\2\v\j\d\p\p\d\8\5\3\w\c\r\l\w\w\w\y\x\m\k\i\n\o\a\k\d\t\f\h\n\l\f\l\o\x\i\8\h\n\c\h\3\6\s\i\a\j\m\b\3\v\3\o\b\n\i\m\p\d\0\h\8\f\d\m\b\t\o\y\u\v\0\0\s\6\t\8\1\u\q\5\s\8\e\c\n\l\i\5\w\j\a\y\h\9\7\o\4\a\9\2\f\l\q\w\r\x\j\z\1\x\h\k\f\1\t\w\z\g\0\n\o\d\u\8\0\x\j\j\2\d\w\y\f\7\8\c\u\5\q\o\g\h\f\h\k\a\o\x\t\p\b\5\x\y\h\m\f\u\v\8\h\i\9\w\t\e\p\y\y\c\c\a\q\l\e\u\1\9\r\x\n\3\j\3\t\e\x\q\u\k\o\3\q\b\1\t\j\8\1\p\t\w\i\4\2\x\q\u\e\t\1\t\6\4\a\u\k\5\2\i\n\y\w\s\0\m\l\m\w\n\c\2\u\r\t\9\9\2\j\9\k\m\g\c\p\p\2\g\z\i\q\a\w\3\l\c\v\e\o\h\4\z\6\w\4\m\d\3\m\f\l\w\v\v\7\d\q\n\2\k\m\6\u\b\c\3\9\w\e\5\k\0\r\3\x\p\9\q\2\c\j\r\v\9\3\4\k\o\r\u\k\f ]] 00:07:25.906 18:57:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:25.906 18:57:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:25.906 18:57:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:25.906 18:57:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:25.906 18:57:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:25.906 18:57:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:25.906 [2024-07-15 18:57:53.111992] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
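dd_flags_misc_forced_aio walks every read flag in flags_ro=(direct nonblock) against every write flag in flags_rw=(direct nonblock sync dsync), copying dd.dump0 to dd.dump1 with each pair and then comparing the two files; the long [[ ... == ... ]] lines in the trace are that comparison under xtrace. The nested loop reduces to roughly the following sketch, with cmp standing in for the harness's inline string comparison:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

  flags_ro=(direct nonblock)                 # read-side flags, as in posix.sh@81
  flags_rw=("${flags_ro[@]}" sync dsync)     # write side adds sync and dsync (posix.sh@82)

  for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
          "$DD" --aio --if="$SRC" --iflag="$flag_ro" --of="$DST" --oflag="$flag_rw"
          # cmp stands in for the test's inline comparison of the two dump files.
          cmp -s "$SRC" "$DST" || { echo "mismatch for $flag_ro/$flag_rw" >&2; exit 1; }
      done
  done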
00:07:25.906 [2024-07-15 18:57:53.112090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63558 ] 00:07:26.165 [2024-07-15 18:57:53.250056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.165 [2024-07-15 18:57:53.334996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.165 [2024-07-15 18:57:53.386753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.422  Copying: 512/512 [B] (average 500 kBps) 00:07:26.422 00:07:26.423 18:57:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ l8ynzot0enm0j0w49emdsozngjfwxwghihbm22kqk63fs1oxdpnayouxl4blcne7maafeiarl7wikpgkzmyqz4bg1zqdqnqgjk0i2usvhj0v4saon1courc843yhyphkudvzq6xjfxf0f0s13y6el61lj70cw4souyphtnppqizpry7nhrphw4nqp3ksh62hs798df70t5ie2um9vo0j25bteytahgadfy8ero4guxjwqker5719d00rg0z38mi0g6mokcq3kza5e55rldggpco3hezduhu8yu2ys7jllgw5ta8kl9ocsenu1958aewlgpwu6jn4qptu8az8urvsjj6mqfsn0nytaojk00j3ipxfiwuy6bcvjnin32anukwspa8glcmsa99y8i7hcoba8zuc56zrpzwuaskytsebi1omvcvuhb8rsu6bmolrf1hci8ptyeh6ro04ff537ikeuu4632xg8py7jqaoa00ltf745nu4v67f342ahsw1z3ii == \l\8\y\n\z\o\t\0\e\n\m\0\j\0\w\4\9\e\m\d\s\o\z\n\g\j\f\w\x\w\g\h\i\h\b\m\2\2\k\q\k\6\3\f\s\1\o\x\d\p\n\a\y\o\u\x\l\4\b\l\c\n\e\7\m\a\a\f\e\i\a\r\l\7\w\i\k\p\g\k\z\m\y\q\z\4\b\g\1\z\q\d\q\n\q\g\j\k\0\i\2\u\s\v\h\j\0\v\4\s\a\o\n\1\c\o\u\r\c\8\4\3\y\h\y\p\h\k\u\d\v\z\q\6\x\j\f\x\f\0\f\0\s\1\3\y\6\e\l\6\1\l\j\7\0\c\w\4\s\o\u\y\p\h\t\n\p\p\q\i\z\p\r\y\7\n\h\r\p\h\w\4\n\q\p\3\k\s\h\6\2\h\s\7\9\8\d\f\7\0\t\5\i\e\2\u\m\9\v\o\0\j\2\5\b\t\e\y\t\a\h\g\a\d\f\y\8\e\r\o\4\g\u\x\j\w\q\k\e\r\5\7\1\9\d\0\0\r\g\0\z\3\8\m\i\0\g\6\m\o\k\c\q\3\k\z\a\5\e\5\5\r\l\d\g\g\p\c\o\3\h\e\z\d\u\h\u\8\y\u\2\y\s\7\j\l\l\g\w\5\t\a\8\k\l\9\o\c\s\e\n\u\1\9\5\8\a\e\w\l\g\p\w\u\6\j\n\4\q\p\t\u\8\a\z\8\u\r\v\s\j\j\6\m\q\f\s\n\0\n\y\t\a\o\j\k\0\0\j\3\i\p\x\f\i\w\u\y\6\b\c\v\j\n\i\n\3\2\a\n\u\k\w\s\p\a\8\g\l\c\m\s\a\9\9\y\8\i\7\h\c\o\b\a\8\z\u\c\5\6\z\r\p\z\w\u\a\s\k\y\t\s\e\b\i\1\o\m\v\c\v\u\h\b\8\r\s\u\6\b\m\o\l\r\f\1\h\c\i\8\p\t\y\e\h\6\r\o\0\4\f\f\5\3\7\i\k\e\u\u\4\6\3\2\x\g\8\p\y\7\j\q\a\o\a\0\0\l\t\f\7\4\5\n\u\4\v\6\7\f\3\4\2\a\h\s\w\1\z\3\i\i ]] 00:07:26.423 18:57:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:26.423 18:57:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:26.423 [2024-07-15 18:57:53.688546] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:26.423 [2024-07-15 18:57:53.688664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63567 ] 00:07:26.680 [2024-07-15 18:57:53.827428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.680 [2024-07-15 18:57:53.936325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.938 [2024-07-15 18:57:53.989377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.196  Copying: 512/512 [B] (average 500 kBps) 00:07:27.196 00:07:27.197 18:57:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ l8ynzot0enm0j0w49emdsozngjfwxwghihbm22kqk63fs1oxdpnayouxl4blcne7maafeiarl7wikpgkzmyqz4bg1zqdqnqgjk0i2usvhj0v4saon1courc843yhyphkudvzq6xjfxf0f0s13y6el61lj70cw4souyphtnppqizpry7nhrphw4nqp3ksh62hs798df70t5ie2um9vo0j25bteytahgadfy8ero4guxjwqker5719d00rg0z38mi0g6mokcq3kza5e55rldggpco3hezduhu8yu2ys7jllgw5ta8kl9ocsenu1958aewlgpwu6jn4qptu8az8urvsjj6mqfsn0nytaojk00j3ipxfiwuy6bcvjnin32anukwspa8glcmsa99y8i7hcoba8zuc56zrpzwuaskytsebi1omvcvuhb8rsu6bmolrf1hci8ptyeh6ro04ff537ikeuu4632xg8py7jqaoa00ltf745nu4v67f342ahsw1z3ii == \l\8\y\n\z\o\t\0\e\n\m\0\j\0\w\4\9\e\m\d\s\o\z\n\g\j\f\w\x\w\g\h\i\h\b\m\2\2\k\q\k\6\3\f\s\1\o\x\d\p\n\a\y\o\u\x\l\4\b\l\c\n\e\7\m\a\a\f\e\i\a\r\l\7\w\i\k\p\g\k\z\m\y\q\z\4\b\g\1\z\q\d\q\n\q\g\j\k\0\i\2\u\s\v\h\j\0\v\4\s\a\o\n\1\c\o\u\r\c\8\4\3\y\h\y\p\h\k\u\d\v\z\q\6\x\j\f\x\f\0\f\0\s\1\3\y\6\e\l\6\1\l\j\7\0\c\w\4\s\o\u\y\p\h\t\n\p\p\q\i\z\p\r\y\7\n\h\r\p\h\w\4\n\q\p\3\k\s\h\6\2\h\s\7\9\8\d\f\7\0\t\5\i\e\2\u\m\9\v\o\0\j\2\5\b\t\e\y\t\a\h\g\a\d\f\y\8\e\r\o\4\g\u\x\j\w\q\k\e\r\5\7\1\9\d\0\0\r\g\0\z\3\8\m\i\0\g\6\m\o\k\c\q\3\k\z\a\5\e\5\5\r\l\d\g\g\p\c\o\3\h\e\z\d\u\h\u\8\y\u\2\y\s\7\j\l\l\g\w\5\t\a\8\k\l\9\o\c\s\e\n\u\1\9\5\8\a\e\w\l\g\p\w\u\6\j\n\4\q\p\t\u\8\a\z\8\u\r\v\s\j\j\6\m\q\f\s\n\0\n\y\t\a\o\j\k\0\0\j\3\i\p\x\f\i\w\u\y\6\b\c\v\j\n\i\n\3\2\a\n\u\k\w\s\p\a\8\g\l\c\m\s\a\9\9\y\8\i\7\h\c\o\b\a\8\z\u\c\5\6\z\r\p\z\w\u\a\s\k\y\t\s\e\b\i\1\o\m\v\c\v\u\h\b\8\r\s\u\6\b\m\o\l\r\f\1\h\c\i\8\p\t\y\e\h\6\r\o\0\4\f\f\5\3\7\i\k\e\u\u\4\6\3\2\x\g\8\p\y\7\j\q\a\o\a\0\0\l\t\f\7\4\5\n\u\4\v\6\7\f\3\4\2\a\h\s\w\1\z\3\i\i ]] 00:07:27.197 18:57:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:27.197 18:57:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:27.197 [2024-07-15 18:57:54.311193] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:27.197 [2024-07-15 18:57:54.311290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63574 ] 00:07:27.197 [2024-07-15 18:57:54.444807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.455 [2024-07-15 18:57:54.531641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.455 [2024-07-15 18:57:54.582107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.713  Copying: 512/512 [B] (average 166 kBps) 00:07:27.713 00:07:27.713 18:57:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ l8ynzot0enm0j0w49emdsozngjfwxwghihbm22kqk63fs1oxdpnayouxl4blcne7maafeiarl7wikpgkzmyqz4bg1zqdqnqgjk0i2usvhj0v4saon1courc843yhyphkudvzq6xjfxf0f0s13y6el61lj70cw4souyphtnppqizpry7nhrphw4nqp3ksh62hs798df70t5ie2um9vo0j25bteytahgadfy8ero4guxjwqker5719d00rg0z38mi0g6mokcq3kza5e55rldggpco3hezduhu8yu2ys7jllgw5ta8kl9ocsenu1958aewlgpwu6jn4qptu8az8urvsjj6mqfsn0nytaojk00j3ipxfiwuy6bcvjnin32anukwspa8glcmsa99y8i7hcoba8zuc56zrpzwuaskytsebi1omvcvuhb8rsu6bmolrf1hci8ptyeh6ro04ff537ikeuu4632xg8py7jqaoa00ltf745nu4v67f342ahsw1z3ii == \l\8\y\n\z\o\t\0\e\n\m\0\j\0\w\4\9\e\m\d\s\o\z\n\g\j\f\w\x\w\g\h\i\h\b\m\2\2\k\q\k\6\3\f\s\1\o\x\d\p\n\a\y\o\u\x\l\4\b\l\c\n\e\7\m\a\a\f\e\i\a\r\l\7\w\i\k\p\g\k\z\m\y\q\z\4\b\g\1\z\q\d\q\n\q\g\j\k\0\i\2\u\s\v\h\j\0\v\4\s\a\o\n\1\c\o\u\r\c\8\4\3\y\h\y\p\h\k\u\d\v\z\q\6\x\j\f\x\f\0\f\0\s\1\3\y\6\e\l\6\1\l\j\7\0\c\w\4\s\o\u\y\p\h\t\n\p\p\q\i\z\p\r\y\7\n\h\r\p\h\w\4\n\q\p\3\k\s\h\6\2\h\s\7\9\8\d\f\7\0\t\5\i\e\2\u\m\9\v\o\0\j\2\5\b\t\e\y\t\a\h\g\a\d\f\y\8\e\r\o\4\g\u\x\j\w\q\k\e\r\5\7\1\9\d\0\0\r\g\0\z\3\8\m\i\0\g\6\m\o\k\c\q\3\k\z\a\5\e\5\5\r\l\d\g\g\p\c\o\3\h\e\z\d\u\h\u\8\y\u\2\y\s\7\j\l\l\g\w\5\t\a\8\k\l\9\o\c\s\e\n\u\1\9\5\8\a\e\w\l\g\p\w\u\6\j\n\4\q\p\t\u\8\a\z\8\u\r\v\s\j\j\6\m\q\f\s\n\0\n\y\t\a\o\j\k\0\0\j\3\i\p\x\f\i\w\u\y\6\b\c\v\j\n\i\n\3\2\a\n\u\k\w\s\p\a\8\g\l\c\m\s\a\9\9\y\8\i\7\h\c\o\b\a\8\z\u\c\5\6\z\r\p\z\w\u\a\s\k\y\t\s\e\b\i\1\o\m\v\c\v\u\h\b\8\r\s\u\6\b\m\o\l\r\f\1\h\c\i\8\p\t\y\e\h\6\r\o\0\4\f\f\5\3\7\i\k\e\u\u\4\6\3\2\x\g\8\p\y\7\j\q\a\o\a\0\0\l\t\f\7\4\5\n\u\4\v\6\7\f\3\4\2\a\h\s\w\1\z\3\i\i ]] 00:07:27.713 18:57:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:27.713 18:57:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:27.713 [2024-07-15 18:57:54.893967] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
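The direct, nonblock, sync and dsync names cycled through above match the iflag=/oflag= values of GNU coreutils dd, where they select O_DIRECT, O_NONBLOCK, O_SYNC and O_DSYNC on the underlying open(2); the same flags can be exercised outside SPDK with a plain dd run such as the one below (the /tmp filenames are placeholders, not part of this job, and direct I/O needs a filesystem with O_DIRECT support):

  dd if=/dev/urandom of=/tmp/in.bin bs=4096 count=256              # 1 MiB of test data
  dd if=/tmp/in.bin of=/tmp/out.bin bs=4096 iflag=direct,nonblock oflag=dsync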
00:07:27.713 [2024-07-15 18:57:54.894253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63587 ] 00:07:27.972 [2024-07-15 18:57:55.033738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.972 [2024-07-15 18:57:55.133545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.972 [2024-07-15 18:57:55.189746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.231  Copying: 512/512 [B] (average 500 kBps) 00:07:28.231 00:07:28.231 18:57:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ l8ynzot0enm0j0w49emdsozngjfwxwghihbm22kqk63fs1oxdpnayouxl4blcne7maafeiarl7wikpgkzmyqz4bg1zqdqnqgjk0i2usvhj0v4saon1courc843yhyphkudvzq6xjfxf0f0s13y6el61lj70cw4souyphtnppqizpry7nhrphw4nqp3ksh62hs798df70t5ie2um9vo0j25bteytahgadfy8ero4guxjwqker5719d00rg0z38mi0g6mokcq3kza5e55rldggpco3hezduhu8yu2ys7jllgw5ta8kl9ocsenu1958aewlgpwu6jn4qptu8az8urvsjj6mqfsn0nytaojk00j3ipxfiwuy6bcvjnin32anukwspa8glcmsa99y8i7hcoba8zuc56zrpzwuaskytsebi1omvcvuhb8rsu6bmolrf1hci8ptyeh6ro04ff537ikeuu4632xg8py7jqaoa00ltf745nu4v67f342ahsw1z3ii == \l\8\y\n\z\o\t\0\e\n\m\0\j\0\w\4\9\e\m\d\s\o\z\n\g\j\f\w\x\w\g\h\i\h\b\m\2\2\k\q\k\6\3\f\s\1\o\x\d\p\n\a\y\o\u\x\l\4\b\l\c\n\e\7\m\a\a\f\e\i\a\r\l\7\w\i\k\p\g\k\z\m\y\q\z\4\b\g\1\z\q\d\q\n\q\g\j\k\0\i\2\u\s\v\h\j\0\v\4\s\a\o\n\1\c\o\u\r\c\8\4\3\y\h\y\p\h\k\u\d\v\z\q\6\x\j\f\x\f\0\f\0\s\1\3\y\6\e\l\6\1\l\j\7\0\c\w\4\s\o\u\y\p\h\t\n\p\p\q\i\z\p\r\y\7\n\h\r\p\h\w\4\n\q\p\3\k\s\h\6\2\h\s\7\9\8\d\f\7\0\t\5\i\e\2\u\m\9\v\o\0\j\2\5\b\t\e\y\t\a\h\g\a\d\f\y\8\e\r\o\4\g\u\x\j\w\q\k\e\r\5\7\1\9\d\0\0\r\g\0\z\3\8\m\i\0\g\6\m\o\k\c\q\3\k\z\a\5\e\5\5\r\l\d\g\g\p\c\o\3\h\e\z\d\u\h\u\8\y\u\2\y\s\7\j\l\l\g\w\5\t\a\8\k\l\9\o\c\s\e\n\u\1\9\5\8\a\e\w\l\g\p\w\u\6\j\n\4\q\p\t\u\8\a\z\8\u\r\v\s\j\j\6\m\q\f\s\n\0\n\y\t\a\o\j\k\0\0\j\3\i\p\x\f\i\w\u\y\6\b\c\v\j\n\i\n\3\2\a\n\u\k\w\s\p\a\8\g\l\c\m\s\a\9\9\y\8\i\7\h\c\o\b\a\8\z\u\c\5\6\z\r\p\z\w\u\a\s\k\y\t\s\e\b\i\1\o\m\v\c\v\u\h\b\8\r\s\u\6\b\m\o\l\r\f\1\h\c\i\8\p\t\y\e\h\6\r\o\0\4\f\f\5\3\7\i\k\e\u\u\4\6\3\2\x\g\8\p\y\7\j\q\a\o\a\0\0\l\t\f\7\4\5\n\u\4\v\6\7\f\3\4\2\a\h\s\w\1\z\3\i\i ]] 00:07:28.231 00:07:28.231 real 0m4.961s 00:07:28.231 user 0m2.806s 00:07:28.231 sys 0m1.179s 00:07:28.231 ************************************ 00:07:28.231 END TEST dd_flags_misc_forced_aio 00:07:28.231 ************************************ 00:07:28.231 18:57:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.231 18:57:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:28.231 18:57:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:28.231 18:57:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:28.231 18:57:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:28.231 18:57:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:28.231 ************************************ 00:07:28.231 END TEST spdk_dd_posix 00:07:28.231 ************************************ 00:07:28.231 00:07:28.231 real 0m22.714s 00:07:28.231 user 0m11.688s 
00:07:28.231 sys 0m6.930s 00:07:28.231 18:57:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.231 18:57:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:28.491 18:57:55 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:28.491 18:57:55 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:28.491 18:57:55 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.491 18:57:55 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.491 18:57:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:28.491 ************************************ 00:07:28.491 START TEST spdk_dd_malloc 00:07:28.491 ************************************ 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:28.491 * Looking for test storage... 00:07:28.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:28.491 ************************************ 00:07:28.491 START TEST dd_malloc_copy 00:07:28.491 ************************************ 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:28.491 18:57:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:28.491 [2024-07-15 18:57:55.694889] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
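dd_malloc_copy drives spdk_dd between two in-memory bdevs instead of files: gen_conf emits the JSON configuration that follows in the trace (two malloc bdevs of 1048576 blocks x 512 bytes, i.e. 512 MiB each) on fd 62, and the copy runs malloc0 -> malloc1 and then back. A self-contained reconstruction of that invocation, assuming the same spdk_dd binary, is:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

  # Same bdev layout as the generated config in the trace: two 512 MiB malloc bdevs.
  conf='{
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
            "method": "bdev_malloc_create" },
          { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
            "method": "bdev_malloc_create" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }'

  # Process substitution hands the config over a /dev/fd path, just like the
  # harness's --json /dev/fd/62.
  "$DD" --ib=malloc0 --ob=malloc1 --json <(echo "$conf")   # malloc0 -> malloc1
  "$DD" --ib=malloc1 --ob=malloc0 --json <(echo "$conf")   # and back again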
00:07:28.492 [2024-07-15 18:57:55.694977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63656 ] 00:07:28.492 { 00:07:28.492 "subsystems": [ 00:07:28.492 { 00:07:28.492 "subsystem": "bdev", 00:07:28.492 "config": [ 00:07:28.492 { 00:07:28.492 "params": { 00:07:28.492 "block_size": 512, 00:07:28.492 "num_blocks": 1048576, 00:07:28.492 "name": "malloc0" 00:07:28.492 }, 00:07:28.492 "method": "bdev_malloc_create" 00:07:28.492 }, 00:07:28.492 { 00:07:28.492 "params": { 00:07:28.492 "block_size": 512, 00:07:28.492 "num_blocks": 1048576, 00:07:28.492 "name": "malloc1" 00:07:28.492 }, 00:07:28.492 "method": "bdev_malloc_create" 00:07:28.492 }, 00:07:28.492 { 00:07:28.492 "method": "bdev_wait_for_examine" 00:07:28.492 } 00:07:28.492 ] 00:07:28.492 } 00:07:28.492 ] 00:07:28.492 } 00:07:28.751 [2024-07-15 18:57:55.830452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.751 [2024-07-15 18:57:55.927478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.751 [2024-07-15 18:57:55.981714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.565  Copying: 194/512 [MB] (194 MBps) Copying: 395/512 [MB] (201 MBps) Copying: 512/512 [MB] (average 198 MBps) 00:07:32.565 00:07:32.565 18:57:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:32.565 18:57:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:32.565 18:57:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:32.565 18:57:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.565 [2024-07-15 18:57:59.575920] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:32.565 [2024-07-15 18:57:59.576015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63708 ] 00:07:32.565 { 00:07:32.565 "subsystems": [ 00:07:32.565 { 00:07:32.565 "subsystem": "bdev", 00:07:32.565 "config": [ 00:07:32.565 { 00:07:32.565 "params": { 00:07:32.565 "block_size": 512, 00:07:32.565 "num_blocks": 1048576, 00:07:32.565 "name": "malloc0" 00:07:32.565 }, 00:07:32.565 "method": "bdev_malloc_create" 00:07:32.565 }, 00:07:32.565 { 00:07:32.565 "params": { 00:07:32.565 "block_size": 512, 00:07:32.565 "num_blocks": 1048576, 00:07:32.565 "name": "malloc1" 00:07:32.565 }, 00:07:32.565 "method": "bdev_malloc_create" 00:07:32.565 }, 00:07:32.565 { 00:07:32.565 "method": "bdev_wait_for_examine" 00:07:32.565 } 00:07:32.565 ] 00:07:32.565 } 00:07:32.565 ] 00:07:32.565 } 00:07:32.565 [2024-07-15 18:57:59.714673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.565 [2024-07-15 18:57:59.818890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.824 [2024-07-15 18:57:59.871587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.338  Copying: 198/512 [MB] (198 MBps) Copying: 397/512 [MB] (199 MBps) Copying: 512/512 [MB] (average 199 MBps) 00:07:36.338 00:07:36.338 00:07:36.338 real 0m7.752s 00:07:36.338 user 0m6.732s 00:07:36.338 sys 0m0.854s 00:07:36.338 18:58:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.338 ************************************ 00:07:36.338 END TEST dd_malloc_copy 00:07:36.338 ************************************ 00:07:36.338 18:58:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:36.338 18:58:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:07:36.338 00:07:36.338 real 0m7.891s 00:07:36.338 user 0m6.781s 00:07:36.338 sys 0m0.941s 00:07:36.338 18:58:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.338 18:58:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:36.338 ************************************ 00:07:36.338 END TEST spdk_dd_malloc 00:07:36.338 ************************************ 00:07:36.338 18:58:03 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:36.338 18:58:03 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:36.338 18:58:03 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:36.338 18:58:03 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.338 18:58:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:36.338 ************************************ 00:07:36.338 START TEST spdk_dd_bdev_to_bdev 00:07:36.338 ************************************ 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:36.338 * Looking for test storage... 
00:07:36.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:36.338 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:36.339 
18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:36.339 ************************************ 00:07:36.339 START TEST dd_inflate_file 00:07:36.339 ************************************ 00:07:36.339 18:58:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:36.596 [2024-07-15 18:58:03.658849] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:36.596 [2024-07-15 18:58:03.659043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63814 ] 00:07:36.596 [2024-07-15 18:58:03.802187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.854 [2024-07-15 18:58:03.915017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.854 [2024-07-15 18:58:03.967258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.113  Copying: 64/64 [MB] (average 1422 MBps) 00:07:37.113 00:07:37.113 00:07:37.113 real 0m0.678s 00:07:37.113 user 0m0.424s 00:07:37.113 sys 0m0.318s 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:37.113 ************************************ 00:07:37.113 END TEST dd_inflate_file 00:07:37.113 ************************************ 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:37.113 ************************************ 00:07:37.113 START TEST dd_copy_to_out_bdev 00:07:37.113 ************************************ 00:07:37.113 18:58:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:37.113 [2024-07-15 18:58:04.375240] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
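dd_copy_to_out_bdev then pushes the inflated dd.dump0 file onto the first NVMe namespace; the JSON config for that run (printed just below) attaches both PCIe controllers, 0000:00:10.0 as Nvme0 and 0000:00:11.0 as Nvme1, via bdev_nvme_attach_controller. A hedged stand-alone sketch of the inflate step plus the file-to-bdev copy, with the config inlined rather than produced by gen_conf:

# Sketch: build dd.dump0 (magic line plus 64 MiB of appended zeroes,
# 67108891 bytes in total) and copy it onto Nvme0n1; the PCIe addresses
# match the ones used in this run.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0

echo 'This Is Our Magic, find it' > "$DUMP0"
"$SPDK_DD" --if=/dev/zero --of="$DUMP0" --oflag=append --bs=1048576 --count=64

"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "pcie", "traddr": "0000:00:11.0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)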
00:07:37.113 [2024-07-15 18:58:04.375388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63851 ] 00:07:37.113 { 00:07:37.113 "subsystems": [ 00:07:37.113 { 00:07:37.113 "subsystem": "bdev", 00:07:37.113 "config": [ 00:07:37.113 { 00:07:37.113 "params": { 00:07:37.113 "trtype": "pcie", 00:07:37.113 "traddr": "0000:00:10.0", 00:07:37.113 "name": "Nvme0" 00:07:37.113 }, 00:07:37.113 "method": "bdev_nvme_attach_controller" 00:07:37.113 }, 00:07:37.113 { 00:07:37.113 "params": { 00:07:37.113 "trtype": "pcie", 00:07:37.113 "traddr": "0000:00:11.0", 00:07:37.113 "name": "Nvme1" 00:07:37.113 }, 00:07:37.113 "method": "bdev_nvme_attach_controller" 00:07:37.113 }, 00:07:37.113 { 00:07:37.113 "method": "bdev_wait_for_examine" 00:07:37.113 } 00:07:37.113 ] 00:07:37.113 } 00:07:37.113 ] 00:07:37.113 } 00:07:37.372 [2024-07-15 18:58:04.515033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.372 [2024-07-15 18:58:04.623698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.630 [2024-07-15 18:58:04.679569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.083  Copying: 55/64 [MB] (55 MBps) Copying: 64/64 [MB] (average 56 MBps) 00:07:39.083 00:07:39.083 00:07:39.083 real 0m1.950s 00:07:39.083 user 0m1.700s 00:07:39.083 sys 0m1.505s 00:07:39.083 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.083 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:39.083 ************************************ 00:07:39.083 END TEST dd_copy_to_out_bdev 00:07:39.083 ************************************ 00:07:39.083 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:39.083 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:39.083 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:39.083 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.083 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.083 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:39.083 ************************************ 00:07:39.083 START TEST dd_offset_magic 00:07:39.083 ************************************ 00:07:39.083 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:07:39.083 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:39.083 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:39.084 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:39.084 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:39.084 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:39.084 18:58:06 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:39.084 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:39.084 18:58:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:39.341 [2024-07-15 18:58:06.385740] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:39.341 [2024-07-15 18:58:06.385833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63892 ] 00:07:39.341 { 00:07:39.341 "subsystems": [ 00:07:39.341 { 00:07:39.341 "subsystem": "bdev", 00:07:39.341 "config": [ 00:07:39.341 { 00:07:39.341 "params": { 00:07:39.341 "trtype": "pcie", 00:07:39.341 "traddr": "0000:00:10.0", 00:07:39.341 "name": "Nvme0" 00:07:39.341 }, 00:07:39.341 "method": "bdev_nvme_attach_controller" 00:07:39.341 }, 00:07:39.341 { 00:07:39.341 "params": { 00:07:39.341 "trtype": "pcie", 00:07:39.341 "traddr": "0000:00:11.0", 00:07:39.341 "name": "Nvme1" 00:07:39.341 }, 00:07:39.341 "method": "bdev_nvme_attach_controller" 00:07:39.341 }, 00:07:39.341 { 00:07:39.341 "method": "bdev_wait_for_examine" 00:07:39.341 } 00:07:39.341 ] 00:07:39.341 } 00:07:39.341 ] 00:07:39.341 } 00:07:39.341 [2024-07-15 18:58:06.525455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.599 [2024-07-15 18:58:06.637928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.599 [2024-07-15 18:58:06.694038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.114  Copying: 65/65 [MB] (average 833 MBps) 00:07:40.114 00:07:40.114 18:58:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:40.114 18:58:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:40.114 18:58:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:40.114 18:58:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:40.114 [2024-07-15 18:58:07.258075] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
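Each offset_magic iteration writes 65 MiB from Nvme0n1 into Nvme1n1 starting at an offset given in 1 MiB blocks (16, then 64), reads 1 MiB back from the same offset into dd.dump1, and checks that the 26-byte magic survived the round trip. A condensed sketch of one iteration; CONF is a placeholder for a file holding the same Nvme0/Nvme1 attach config as above, and the redirect into read is spelled out here while the test wires it through file descriptors:

# One offset_magic iteration (offset counted in 1 MiB blocks).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
offset=16

# Copy 65 MiB from Nvme0n1 to Nvme1n1, starting $offset MiB into the target bdev.
"$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$offset" --bs=1048576 --json "$CONF"

# Read the first 1 MiB back from that offset and verify the magic string.
"$SPDK_DD" --ib=Nvme1n1 --of="$DUMP1" --count=1 --skip="$offset" --bs=1048576 --json "$CONF"
read -rn26 magic_check < "$DUMP1"
[[ $magic_check == 'This Is Our Magic, find it' ]] || echo "magic mismatch at offset $offset" >&2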
00:07:40.114 [2024-07-15 18:58:07.258187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63912 ] 00:07:40.114 { 00:07:40.114 "subsystems": [ 00:07:40.114 { 00:07:40.114 "subsystem": "bdev", 00:07:40.114 "config": [ 00:07:40.114 { 00:07:40.114 "params": { 00:07:40.114 "trtype": "pcie", 00:07:40.114 "traddr": "0000:00:10.0", 00:07:40.114 "name": "Nvme0" 00:07:40.114 }, 00:07:40.114 "method": "bdev_nvme_attach_controller" 00:07:40.114 }, 00:07:40.114 { 00:07:40.114 "params": { 00:07:40.114 "trtype": "pcie", 00:07:40.114 "traddr": "0000:00:11.0", 00:07:40.114 "name": "Nvme1" 00:07:40.114 }, 00:07:40.114 "method": "bdev_nvme_attach_controller" 00:07:40.114 }, 00:07:40.114 { 00:07:40.114 "method": "bdev_wait_for_examine" 00:07:40.114 } 00:07:40.114 ] 00:07:40.114 } 00:07:40.114 ] 00:07:40.114 } 00:07:40.114 [2024-07-15 18:58:07.395308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.372 [2024-07-15 18:58:07.513259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.372 [2024-07-15 18:58:07.567655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.888  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:40.888 00:07:40.888 18:58:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:40.888 18:58:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:40.888 18:58:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:40.888 18:58:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:40.888 18:58:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:40.888 18:58:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:40.888 18:58:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:40.888 [2024-07-15 18:58:08.028673] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:40.888 [2024-07-15 18:58:08.028783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63934 ] 00:07:40.888 { 00:07:40.888 "subsystems": [ 00:07:40.888 { 00:07:40.888 "subsystem": "bdev", 00:07:40.888 "config": [ 00:07:40.888 { 00:07:40.888 "params": { 00:07:40.888 "trtype": "pcie", 00:07:40.888 "traddr": "0000:00:10.0", 00:07:40.888 "name": "Nvme0" 00:07:40.888 }, 00:07:40.888 "method": "bdev_nvme_attach_controller" 00:07:40.888 }, 00:07:40.888 { 00:07:40.888 "params": { 00:07:40.888 "trtype": "pcie", 00:07:40.888 "traddr": "0000:00:11.0", 00:07:40.888 "name": "Nvme1" 00:07:40.888 }, 00:07:40.888 "method": "bdev_nvme_attach_controller" 00:07:40.888 }, 00:07:40.888 { 00:07:40.888 "method": "bdev_wait_for_examine" 00:07:40.888 } 00:07:40.888 ] 00:07:40.888 } 00:07:40.888 ] 00:07:40.888 } 00:07:40.888 [2024-07-15 18:58:08.167451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.146 [2024-07-15 18:58:08.284484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.146 [2024-07-15 18:58:08.340014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.662  Copying: 65/65 [MB] (average 970 MBps) 00:07:41.662 00:07:41.662 18:58:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:41.662 18:58:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:41.662 18:58:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:41.662 18:58:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:41.662 [2024-07-15 18:58:08.927248] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:41.662 [2024-07-15 18:58:08.927365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63954 ] 00:07:41.662 { 00:07:41.662 "subsystems": [ 00:07:41.662 { 00:07:41.662 "subsystem": "bdev", 00:07:41.662 "config": [ 00:07:41.662 { 00:07:41.662 "params": { 00:07:41.662 "trtype": "pcie", 00:07:41.662 "traddr": "0000:00:10.0", 00:07:41.662 "name": "Nvme0" 00:07:41.662 }, 00:07:41.662 "method": "bdev_nvme_attach_controller" 00:07:41.662 }, 00:07:41.662 { 00:07:41.662 "params": { 00:07:41.662 "trtype": "pcie", 00:07:41.662 "traddr": "0000:00:11.0", 00:07:41.662 "name": "Nvme1" 00:07:41.662 }, 00:07:41.662 "method": "bdev_nvme_attach_controller" 00:07:41.662 }, 00:07:41.662 { 00:07:41.662 "method": "bdev_wait_for_examine" 00:07:41.662 } 00:07:41.662 ] 00:07:41.662 } 00:07:41.662 ] 00:07:41.662 } 00:07:41.921 [2024-07-15 18:58:09.066343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.921 [2024-07-15 18:58:09.165120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.180 [2024-07-15 18:58:09.218618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.440  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:42.440 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:42.440 00:07:42.440 real 0m3.283s 00:07:42.440 user 0m2.414s 00:07:42.440 sys 0m0.944s 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.440 ************************************ 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:42.440 END TEST dd_offset_magic 00:07:42.440 ************************************ 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:42.440 18:58:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:42.440 [2024-07-15 18:58:09.711472] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
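After the offset checks, the cleanup trap wipes what the test touched: clear_nvme rounds the requested 4194330-byte span up to five 1 MiB blocks and zero-fills the start of each namespace, first Nvme0n1 here and then Nvme1n1 in the run that follows. Stand-alone, with the same CONF placeholder as in the earlier sketches:

# Cleanup sketch: zero-fill the first five 1 MiB blocks of each namespace,
# which covers the 4194330-byte span clear_nvme is asked to clear.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

for bdev in Nvme0n1 Nvme1n1; do
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob="$bdev" --count=5 --json "$CONF"
done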
00:07:42.440 [2024-07-15 18:58:09.711622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63985 ] 00:07:42.440 { 00:07:42.440 "subsystems": [ 00:07:42.440 { 00:07:42.440 "subsystem": "bdev", 00:07:42.440 "config": [ 00:07:42.440 { 00:07:42.440 "params": { 00:07:42.440 "trtype": "pcie", 00:07:42.440 "traddr": "0000:00:10.0", 00:07:42.440 "name": "Nvme0" 00:07:42.440 }, 00:07:42.440 "method": "bdev_nvme_attach_controller" 00:07:42.440 }, 00:07:42.440 { 00:07:42.440 "params": { 00:07:42.440 "trtype": "pcie", 00:07:42.440 "traddr": "0000:00:11.0", 00:07:42.440 "name": "Nvme1" 00:07:42.440 }, 00:07:42.440 "method": "bdev_nvme_attach_controller" 00:07:42.440 }, 00:07:42.440 { 00:07:42.440 "method": "bdev_wait_for_examine" 00:07:42.440 } 00:07:42.440 ] 00:07:42.440 } 00:07:42.440 ] 00:07:42.440 } 00:07:42.698 [2024-07-15 18:58:09.849886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.698 [2024-07-15 18:58:09.958198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.957 [2024-07-15 18:58:10.014008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.216  Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:43.216 00:07:43.216 18:58:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:43.216 18:58:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:43.216 18:58:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:43.216 18:58:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:43.216 18:58:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:43.216 18:58:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:43.216 18:58:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:43.216 18:58:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:43.216 18:58:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:43.216 18:58:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:43.216 { 00:07:43.216 "subsystems": [ 00:07:43.216 { 00:07:43.216 "subsystem": "bdev", 00:07:43.216 "config": [ 00:07:43.216 { 00:07:43.216 "params": { 00:07:43.216 "trtype": "pcie", 00:07:43.216 "traddr": "0000:00:10.0", 00:07:43.216 "name": "Nvme0" 00:07:43.216 }, 00:07:43.216 "method": "bdev_nvme_attach_controller" 00:07:43.216 }, 00:07:43.216 { 00:07:43.216 "params": { 00:07:43.216 "trtype": "pcie", 00:07:43.216 "traddr": "0000:00:11.0", 00:07:43.216 "name": "Nvme1" 00:07:43.216 }, 00:07:43.216 "method": "bdev_nvme_attach_controller" 00:07:43.216 }, 00:07:43.216 { 00:07:43.216 "method": "bdev_wait_for_examine" 00:07:43.216 } 00:07:43.216 ] 00:07:43.216 } 00:07:43.216 ] 00:07:43.216 } 00:07:43.216 [2024-07-15 18:58:10.467662] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:43.216 [2024-07-15 18:58:10.467768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64001 ] 00:07:43.475 [2024-07-15 18:58:10.604595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.476 [2024-07-15 18:58:10.704067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.476 [2024-07-15 18:58:10.757893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.992  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:43.992 00:07:43.992 18:58:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:43.992 00:07:43.992 real 0m7.698s 00:07:43.992 user 0m5.705s 00:07:43.992 sys 0m3.483s 00:07:43.992 18:58:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.992 18:58:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:43.992 ************************************ 00:07:43.992 END TEST spdk_dd_bdev_to_bdev 00:07:43.992 ************************************ 00:07:43.992 18:58:11 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:43.992 18:58:11 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:43.992 18:58:11 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:43.992 18:58:11 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.992 18:58:11 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.992 18:58:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:43.993 ************************************ 00:07:43.993 START TEST spdk_dd_uring 00:07:43.993 ************************************ 00:07:43.993 18:58:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:44.251 * Looking for test storage... 
00:07:44.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:44.251 18:58:11 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:44.251 18:58:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.251 18:58:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.251 18:58:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.251 18:58:11 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:44.252 ************************************ 00:07:44.252 START TEST dd_uring_copy 00:07:44.252 ************************************ 00:07:44.252 
18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=roj3oao0vdx5oqwyvnrro569cswrvt3smlvywqq671lq1dl3h6z5beksdxl4b3frc564vi7iajz7rsj9q3dygcfrqn3y2rv6p3mel4hec7sar4xj57bv5ype8fm79tfykww1d0e772iqvo2fda2kalepr2nqr7eo3f9min6dp4ekxk54u9bwr7v6dtkvx19upfqei5bbde608n82wuve0dl1m8zrexhps6dwy2bn7lul3bw2iclt4xhdcuaptbnebsfmajswu51gbz5yxpqidqnaoc4gd6lsnbk7h3r5kqbu2s3vu5db1j1xdumwl5f7c6lmahk11q63ax1i8dh3mieolxrz0f0tcrggawh3778ce5sjx7hdlaiarsoosa6eripfmnxnoz54vob4onbgtk6ujd729lndco64ppf5sg1jghzesqp1982oc248yjxymm1iu6p4qy4qupzyla71stfull987bf26fnnbzx08br6rkb441gvtfsc37pzmc9eesysxztai00d4l821n14399wz3mgtqxvv6dvw53zio4dhb95esvxnsctblzk6065sz59bcp7ylru3twujr03ihg4jiqh3z9z3xk3cy3tamla7um55g6mscdzzoa3qmusw65lq77250sl9zo5rz33yw3ol11paei5413rbw9wg3vlskg0rkzi6t6l356n5u4x8gq2q4anee6uk6gkaoeq6jnib33kdbcmlq3ye642dw3giuqeoe4xww473ldvd16dttk5u9froidvisupdzghr8g5e1e4i8ou4wjssh653eh78iism9bei8052ig4yg34hpd0ue0kzz11b2gaq1shwil5h2x96ovmy5za3284rf3xrfmheaxlmcnbi9p0rxpmzyn3609qexpayt07gccs0cqggo687np2cjokckfouvmxb2pr0ntozw7i9pzki6ec2h7n0493s4fowb8nj0k81ci84y0o9juv570romya2m88l1jb1qzpb31jcfw5jzkv 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo roj3oao0vdx5oqwyvnrro569cswrvt3smlvywqq671lq1dl3h6z5beksdxl4b3frc564vi7iajz7rsj9q3dygcfrqn3y2rv6p3mel4hec7sar4xj57bv5ype8fm79tfykww1d0e772iqvo2fda2kalepr2nqr7eo3f9min6dp4ekxk54u9bwr7v6dtkvx19upfqei5bbde608n82wuve0dl1m8zrexhps6dwy2bn7lul3bw2iclt4xhdcuaptbnebsfmajswu51gbz5yxpqidqnaoc4gd6lsnbk7h3r5kqbu2s3vu5db1j1xdumwl5f7c6lmahk11q63ax1i8dh3mieolxrz0f0tcrggawh3778ce5sjx7hdlaiarsoosa6eripfmnxnoz54vob4onbgtk6ujd729lndco64ppf5sg1jghzesqp1982oc248yjxymm1iu6p4qy4qupzyla71stfull987bf26fnnbzx08br6rkb441gvtfsc37pzmc9eesysxztai00d4l821n14399wz3mgtqxvv6dvw53zio4dhb95esvxnsctblzk6065sz59bcp7ylru3twujr03ihg4jiqh3z9z3xk3cy3tamla7um55g6mscdzzoa3qmusw65lq77250sl9zo5rz33yw3ol11paei5413rbw9wg3vlskg0rkzi6t6l356n5u4x8gq2q4anee6uk6gkaoeq6jnib33kdbcmlq3ye642dw3giuqeoe4xww473ldvd16dttk5u9froidvisupdzghr8g5e1e4i8ou4wjssh653eh78iism9bei8052ig4yg34hpd0ue0kzz11b2gaq1shwil5h2x96ovmy5za3284rf3xrfmheaxlmcnbi9p0rxpmzyn3609qexpayt07gccs0cqggo687np2cjokckfouvmxb2pr0ntozw7i9pzki6ec2h7n0493s4fowb8nj0k81ci84y0o9juv570romya2m88l1jb1qzpb31jcfw5jzkv 00:07:44.252 18:58:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:44.252 [2024-07-15 18:58:11.423258] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
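dd_uring_copy builds its io_uring target out of a zram device: a fresh device id comes from /sys/class/zram-control/hot_add, the device is sized to 512M, and it is then exposed to spdk_dd as a uring bdev backed by /dev/zram1 alongside a 512 MiB malloc bdev. The 536869887-byte zero append above tops the 1025-byte magic line up to exactly 512 MiB. A stand-alone sketch of the setup and the first file-to-uring copy; the disksize write and the urandom stand-in for gen_bytes are assumptions about plumbing the test hides in its helpers:

# Sketch of the zram-backed uring bdev used by dd_uring_copy (the zram
# sysfs writes need root). The urandom line stands in for the test's gen_bytes.
id=$(cat /sys/class/zram-control/hot_add)      # returned 1 in this run
echo 512M > "/sys/block/zram${id}/disksize"

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0

tr -dc 'a-z0-9' < /dev/urandom | head -c1024 > "$DUMP0" && echo >> "$DUMP0"
"$SPDK_DD" --if=/dev/zero --of="$DUMP0" --oflag=append --bs=536869887 --count=1   # pad to 512 MiB

"$SPDK_DD" --if="$DUMP0" --ob=uring0 --json <(cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_uring_create",
          "params": { "name": "uring0", "filename": "/dev/zram${id}" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)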
00:07:44.252 [2024-07-15 18:58:11.423378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64071 ] 00:07:44.561 [2024-07-15 18:58:11.562016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.561 [2024-07-15 18:58:11.658206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.561 [2024-07-15 18:58:11.712366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.713  Copying: 511/511 [MB] (average 1038 MBps) 00:07:45.713 00:07:45.713 18:58:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:45.713 18:58:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:45.713 18:58:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:45.713 18:58:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:45.713 [2024-07-15 18:58:12.919847] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:45.713 [2024-07-15 18:58:12.919962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64098 ] 00:07:45.713 { 00:07:45.713 "subsystems": [ 00:07:45.713 { 00:07:45.713 "subsystem": "bdev", 00:07:45.713 "config": [ 00:07:45.713 { 00:07:45.713 "params": { 00:07:45.713 "block_size": 512, 00:07:45.713 "num_blocks": 1048576, 00:07:45.713 "name": "malloc0" 00:07:45.713 }, 00:07:45.713 "method": "bdev_malloc_create" 00:07:45.713 }, 00:07:45.713 { 00:07:45.713 "params": { 00:07:45.713 "filename": "/dev/zram1", 00:07:45.713 "name": "uring0" 00:07:45.713 }, 00:07:45.713 "method": "bdev_uring_create" 00:07:45.713 }, 00:07:45.713 { 00:07:45.713 "method": "bdev_wait_for_examine" 00:07:45.713 } 00:07:45.713 ] 00:07:45.713 } 00:07:45.713 ] 00:07:45.713 } 00:07:45.971 [2024-07-15 18:58:13.067121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.971 [2024-07-15 18:58:13.184280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.971 [2024-07-15 18:58:13.241857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.118  Copying: 227/512 [MB] (227 MBps) Copying: 455/512 [MB] (227 MBps) Copying: 512/512 [MB] (average 227 MBps) 00:07:49.118 00:07:49.118 18:58:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:49.118 18:58:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:49.118 18:58:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:49.118 18:58:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:49.118 [2024-07-15 18:58:16.182704] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:49.118 [2024-07-15 18:58:16.182829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64143 ] 00:07:49.118 { 00:07:49.118 "subsystems": [ 00:07:49.118 { 00:07:49.118 "subsystem": "bdev", 00:07:49.118 "config": [ 00:07:49.118 { 00:07:49.118 "params": { 00:07:49.118 "block_size": 512, 00:07:49.118 "num_blocks": 1048576, 00:07:49.118 "name": "malloc0" 00:07:49.118 }, 00:07:49.118 "method": "bdev_malloc_create" 00:07:49.118 }, 00:07:49.118 { 00:07:49.118 "params": { 00:07:49.118 "filename": "/dev/zram1", 00:07:49.118 "name": "uring0" 00:07:49.118 }, 00:07:49.118 "method": "bdev_uring_create" 00:07:49.118 }, 00:07:49.118 { 00:07:49.118 "method": "bdev_wait_for_examine" 00:07:49.118 } 00:07:49.118 ] 00:07:49.118 } 00:07:49.118 ] 00:07:49.118 } 00:07:49.118 [2024-07-15 18:58:16.321703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.118 [2024-07-15 18:58:16.397728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.376 [2024-07-15 18:58:16.450917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.828  Copying: 189/512 [MB] (189 MBps) Copying: 364/512 [MB] (174 MBps) Copying: 512/512 [MB] (average 183 MBps) 00:07:52.828 00:07:52.828 18:58:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:52.828 18:58:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ roj3oao0vdx5oqwyvnrro569cswrvt3smlvywqq671lq1dl3h6z5beksdxl4b3frc564vi7iajz7rsj9q3dygcfrqn3y2rv6p3mel4hec7sar4xj57bv5ype8fm79tfykww1d0e772iqvo2fda2kalepr2nqr7eo3f9min6dp4ekxk54u9bwr7v6dtkvx19upfqei5bbde608n82wuve0dl1m8zrexhps6dwy2bn7lul3bw2iclt4xhdcuaptbnebsfmajswu51gbz5yxpqidqnaoc4gd6lsnbk7h3r5kqbu2s3vu5db1j1xdumwl5f7c6lmahk11q63ax1i8dh3mieolxrz0f0tcrggawh3778ce5sjx7hdlaiarsoosa6eripfmnxnoz54vob4onbgtk6ujd729lndco64ppf5sg1jghzesqp1982oc248yjxymm1iu6p4qy4qupzyla71stfull987bf26fnnbzx08br6rkb441gvtfsc37pzmc9eesysxztai00d4l821n14399wz3mgtqxvv6dvw53zio4dhb95esvxnsctblzk6065sz59bcp7ylru3twujr03ihg4jiqh3z9z3xk3cy3tamla7um55g6mscdzzoa3qmusw65lq77250sl9zo5rz33yw3ol11paei5413rbw9wg3vlskg0rkzi6t6l356n5u4x8gq2q4anee6uk6gkaoeq6jnib33kdbcmlq3ye642dw3giuqeoe4xww473ldvd16dttk5u9froidvisupdzghr8g5e1e4i8ou4wjssh653eh78iism9bei8052ig4yg34hpd0ue0kzz11b2gaq1shwil5h2x96ovmy5za3284rf3xrfmheaxlmcnbi9p0rxpmzyn3609qexpayt07gccs0cqggo687np2cjokckfouvmxb2pr0ntozw7i9pzki6ec2h7n0493s4fowb8nj0k81ci84y0o9juv570romya2m88l1jb1qzpb31jcfw5jzkv == 
\r\o\j\3\o\a\o\0\v\d\x\5\o\q\w\y\v\n\r\r\o\5\6\9\c\s\w\r\v\t\3\s\m\l\v\y\w\q\q\6\7\1\l\q\1\d\l\3\h\6\z\5\b\e\k\s\d\x\l\4\b\3\f\r\c\5\6\4\v\i\7\i\a\j\z\7\r\s\j\9\q\3\d\y\g\c\f\r\q\n\3\y\2\r\v\6\p\3\m\e\l\4\h\e\c\7\s\a\r\4\x\j\5\7\b\v\5\y\p\e\8\f\m\7\9\t\f\y\k\w\w\1\d\0\e\7\7\2\i\q\v\o\2\f\d\a\2\k\a\l\e\p\r\2\n\q\r\7\e\o\3\f\9\m\i\n\6\d\p\4\e\k\x\k\5\4\u\9\b\w\r\7\v\6\d\t\k\v\x\1\9\u\p\f\q\e\i\5\b\b\d\e\6\0\8\n\8\2\w\u\v\e\0\d\l\1\m\8\z\r\e\x\h\p\s\6\d\w\y\2\b\n\7\l\u\l\3\b\w\2\i\c\l\t\4\x\h\d\c\u\a\p\t\b\n\e\b\s\f\m\a\j\s\w\u\5\1\g\b\z\5\y\x\p\q\i\d\q\n\a\o\c\4\g\d\6\l\s\n\b\k\7\h\3\r\5\k\q\b\u\2\s\3\v\u\5\d\b\1\j\1\x\d\u\m\w\l\5\f\7\c\6\l\m\a\h\k\1\1\q\6\3\a\x\1\i\8\d\h\3\m\i\e\o\l\x\r\z\0\f\0\t\c\r\g\g\a\w\h\3\7\7\8\c\e\5\s\j\x\7\h\d\l\a\i\a\r\s\o\o\s\a\6\e\r\i\p\f\m\n\x\n\o\z\5\4\v\o\b\4\o\n\b\g\t\k\6\u\j\d\7\2\9\l\n\d\c\o\6\4\p\p\f\5\s\g\1\j\g\h\z\e\s\q\p\1\9\8\2\o\c\2\4\8\y\j\x\y\m\m\1\i\u\6\p\4\q\y\4\q\u\p\z\y\l\a\7\1\s\t\f\u\l\l\9\8\7\b\f\2\6\f\n\n\b\z\x\0\8\b\r\6\r\k\b\4\4\1\g\v\t\f\s\c\3\7\p\z\m\c\9\e\e\s\y\s\x\z\t\a\i\0\0\d\4\l\8\2\1\n\1\4\3\9\9\w\z\3\m\g\t\q\x\v\v\6\d\v\w\5\3\z\i\o\4\d\h\b\9\5\e\s\v\x\n\s\c\t\b\l\z\k\6\0\6\5\s\z\5\9\b\c\p\7\y\l\r\u\3\t\w\u\j\r\0\3\i\h\g\4\j\i\q\h\3\z\9\z\3\x\k\3\c\y\3\t\a\m\l\a\7\u\m\5\5\g\6\m\s\c\d\z\z\o\a\3\q\m\u\s\w\6\5\l\q\7\7\2\5\0\s\l\9\z\o\5\r\z\3\3\y\w\3\o\l\1\1\p\a\e\i\5\4\1\3\r\b\w\9\w\g\3\v\l\s\k\g\0\r\k\z\i\6\t\6\l\3\5\6\n\5\u\4\x\8\g\q\2\q\4\a\n\e\e\6\u\k\6\g\k\a\o\e\q\6\j\n\i\b\3\3\k\d\b\c\m\l\q\3\y\e\6\4\2\d\w\3\g\i\u\q\e\o\e\4\x\w\w\4\7\3\l\d\v\d\1\6\d\t\t\k\5\u\9\f\r\o\i\d\v\i\s\u\p\d\z\g\h\r\8\g\5\e\1\e\4\i\8\o\u\4\w\j\s\s\h\6\5\3\e\h\7\8\i\i\s\m\9\b\e\i\8\0\5\2\i\g\4\y\g\3\4\h\p\d\0\u\e\0\k\z\z\1\1\b\2\g\a\q\1\s\h\w\i\l\5\h\2\x\9\6\o\v\m\y\5\z\a\3\2\8\4\r\f\3\x\r\f\m\h\e\a\x\l\m\c\n\b\i\9\p\0\r\x\p\m\z\y\n\3\6\0\9\q\e\x\p\a\y\t\0\7\g\c\c\s\0\c\q\g\g\o\6\8\7\n\p\2\c\j\o\k\c\k\f\o\u\v\m\x\b\2\p\r\0\n\t\o\z\w\7\i\9\p\z\k\i\6\e\c\2\h\7\n\0\4\9\3\s\4\f\o\w\b\8\n\j\0\k\8\1\c\i\8\4\y\0\o\9\j\u\v\5\7\0\r\o\m\y\a\2\m\8\8\l\1\j\b\1\q\z\p\b\3\1\j\c\f\w\5\j\z\k\v ]] 00:07:52.828 18:58:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:52.829 18:58:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ roj3oao0vdx5oqwyvnrro569cswrvt3smlvywqq671lq1dl3h6z5beksdxl4b3frc564vi7iajz7rsj9q3dygcfrqn3y2rv6p3mel4hec7sar4xj57bv5ype8fm79tfykww1d0e772iqvo2fda2kalepr2nqr7eo3f9min6dp4ekxk54u9bwr7v6dtkvx19upfqei5bbde608n82wuve0dl1m8zrexhps6dwy2bn7lul3bw2iclt4xhdcuaptbnebsfmajswu51gbz5yxpqidqnaoc4gd6lsnbk7h3r5kqbu2s3vu5db1j1xdumwl5f7c6lmahk11q63ax1i8dh3mieolxrz0f0tcrggawh3778ce5sjx7hdlaiarsoosa6eripfmnxnoz54vob4onbgtk6ujd729lndco64ppf5sg1jghzesqp1982oc248yjxymm1iu6p4qy4qupzyla71stfull987bf26fnnbzx08br6rkb441gvtfsc37pzmc9eesysxztai00d4l821n14399wz3mgtqxvv6dvw53zio4dhb95esvxnsctblzk6065sz59bcp7ylru3twujr03ihg4jiqh3z9z3xk3cy3tamla7um55g6mscdzzoa3qmusw65lq77250sl9zo5rz33yw3ol11paei5413rbw9wg3vlskg0rkzi6t6l356n5u4x8gq2q4anee6uk6gkaoeq6jnib33kdbcmlq3ye642dw3giuqeoe4xww473ldvd16dttk5u9froidvisupdzghr8g5e1e4i8ou4wjssh653eh78iism9bei8052ig4yg34hpd0ue0kzz11b2gaq1shwil5h2x96ovmy5za3284rf3xrfmheaxlmcnbi9p0rxpmzyn3609qexpayt07gccs0cqggo687np2cjokckfouvmxb2pr0ntozw7i9pzki6ec2h7n0493s4fowb8nj0k81ci84y0o9juv570romya2m88l1jb1qzpb31jcfw5jzkv == 
\r\o\j\3\o\a\o\0\v\d\x\5\o\q\w\y\v\n\r\r\o\5\6\9\c\s\w\r\v\t\3\s\m\l\v\y\w\q\q\6\7\1\l\q\1\d\l\3\h\6\z\5\b\e\k\s\d\x\l\4\b\3\f\r\c\5\6\4\v\i\7\i\a\j\z\7\r\s\j\9\q\3\d\y\g\c\f\r\q\n\3\y\2\r\v\6\p\3\m\e\l\4\h\e\c\7\s\a\r\4\x\j\5\7\b\v\5\y\p\e\8\f\m\7\9\t\f\y\k\w\w\1\d\0\e\7\7\2\i\q\v\o\2\f\d\a\2\k\a\l\e\p\r\2\n\q\r\7\e\o\3\f\9\m\i\n\6\d\p\4\e\k\x\k\5\4\u\9\b\w\r\7\v\6\d\t\k\v\x\1\9\u\p\f\q\e\i\5\b\b\d\e\6\0\8\n\8\2\w\u\v\e\0\d\l\1\m\8\z\r\e\x\h\p\s\6\d\w\y\2\b\n\7\l\u\l\3\b\w\2\i\c\l\t\4\x\h\d\c\u\a\p\t\b\n\e\b\s\f\m\a\j\s\w\u\5\1\g\b\z\5\y\x\p\q\i\d\q\n\a\o\c\4\g\d\6\l\s\n\b\k\7\h\3\r\5\k\q\b\u\2\s\3\v\u\5\d\b\1\j\1\x\d\u\m\w\l\5\f\7\c\6\l\m\a\h\k\1\1\q\6\3\a\x\1\i\8\d\h\3\m\i\e\o\l\x\r\z\0\f\0\t\c\r\g\g\a\w\h\3\7\7\8\c\e\5\s\j\x\7\h\d\l\a\i\a\r\s\o\o\s\a\6\e\r\i\p\f\m\n\x\n\o\z\5\4\v\o\b\4\o\n\b\g\t\k\6\u\j\d\7\2\9\l\n\d\c\o\6\4\p\p\f\5\s\g\1\j\g\h\z\e\s\q\p\1\9\8\2\o\c\2\4\8\y\j\x\y\m\m\1\i\u\6\p\4\q\y\4\q\u\p\z\y\l\a\7\1\s\t\f\u\l\l\9\8\7\b\f\2\6\f\n\n\b\z\x\0\8\b\r\6\r\k\b\4\4\1\g\v\t\f\s\c\3\7\p\z\m\c\9\e\e\s\y\s\x\z\t\a\i\0\0\d\4\l\8\2\1\n\1\4\3\9\9\w\z\3\m\g\t\q\x\v\v\6\d\v\w\5\3\z\i\o\4\d\h\b\9\5\e\s\v\x\n\s\c\t\b\l\z\k\6\0\6\5\s\z\5\9\b\c\p\7\y\l\r\u\3\t\w\u\j\r\0\3\i\h\g\4\j\i\q\h\3\z\9\z\3\x\k\3\c\y\3\t\a\m\l\a\7\u\m\5\5\g\6\m\s\c\d\z\z\o\a\3\q\m\u\s\w\6\5\l\q\7\7\2\5\0\s\l\9\z\o\5\r\z\3\3\y\w\3\o\l\1\1\p\a\e\i\5\4\1\3\r\b\w\9\w\g\3\v\l\s\k\g\0\r\k\z\i\6\t\6\l\3\5\6\n\5\u\4\x\8\g\q\2\q\4\a\n\e\e\6\u\k\6\g\k\a\o\e\q\6\j\n\i\b\3\3\k\d\b\c\m\l\q\3\y\e\6\4\2\d\w\3\g\i\u\q\e\o\e\4\x\w\w\4\7\3\l\d\v\d\1\6\d\t\t\k\5\u\9\f\r\o\i\d\v\i\s\u\p\d\z\g\h\r\8\g\5\e\1\e\4\i\8\o\u\4\w\j\s\s\h\6\5\3\e\h\7\8\i\i\s\m\9\b\e\i\8\0\5\2\i\g\4\y\g\3\4\h\p\d\0\u\e\0\k\z\z\1\1\b\2\g\a\q\1\s\h\w\i\l\5\h\2\x\9\6\o\v\m\y\5\z\a\3\2\8\4\r\f\3\x\r\f\m\h\e\a\x\l\m\c\n\b\i\9\p\0\r\x\p\m\z\y\n\3\6\0\9\q\e\x\p\a\y\t\0\7\g\c\c\s\0\c\q\g\g\o\6\8\7\n\p\2\c\j\o\k\c\k\f\o\u\v\m\x\b\2\p\r\0\n\t\o\z\w\7\i\9\p\z\k\i\6\e\c\2\h\7\n\0\4\9\3\s\4\f\o\w\b\8\n\j\0\k\8\1\c\i\8\4\y\0\o\9\j\u\v\5\7\0\r\o\m\y\a\2\m\8\8\l\1\j\b\1\q\z\p\b\3\1\j\c\f\w\5\j\z\k\v ]] 00:07:52.829 18:58:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:53.088 18:58:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:53.088 18:58:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:53.088 18:58:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:53.088 18:58:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:53.088 [2024-07-15 18:58:20.300127] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:07:53.088 [2024-07-15 18:58:20.300234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64205 ] 00:07:53.088 { 00:07:53.088 "subsystems": [ 00:07:53.088 { 00:07:53.088 "subsystem": "bdev", 00:07:53.088 "config": [ 00:07:53.088 { 00:07:53.088 "params": { 00:07:53.088 "block_size": 512, 00:07:53.088 "num_blocks": 1048576, 00:07:53.088 "name": "malloc0" 00:07:53.088 }, 00:07:53.088 "method": "bdev_malloc_create" 00:07:53.088 }, 00:07:53.088 { 00:07:53.088 "params": { 00:07:53.088 "filename": "/dev/zram1", 00:07:53.088 "name": "uring0" 00:07:53.088 }, 00:07:53.088 "method": "bdev_uring_create" 00:07:53.088 }, 00:07:53.088 { 00:07:53.088 "method": "bdev_wait_for_examine" 00:07:53.088 } 00:07:53.088 ] 00:07:53.088 } 00:07:53.088 ] 00:07:53.088 } 00:07:53.347 [2024-07-15 18:58:20.432754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.347 [2024-07-15 18:58:20.531808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.347 [2024-07-15 18:58:20.587904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:57.419  Copying: 148/512 [MB] (148 MBps) Copying: 306/512 [MB] (157 MBps) Copying: 453/512 [MB] (147 MBps) Copying: 512/512 [MB] (average 150 MBps) 00:07:57.419 00:07:57.419 18:58:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:57.419 18:58:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:57.419 18:58:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:57.419 18:58:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:57.419 18:58:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:57.419 18:58:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:57.419 18:58:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:57.419 18:58:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:57.419 [2024-07-15 18:58:24.681009] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
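This run and the one that follows exercise bdev_uring_delete: here uring0 is created and immediately deleted by the config while the copy itself shuttles data between two pipes, which succeeds, whereas the next run (wrapped in NOT) asks spdk_dd to read from uring0 after the delete and is expected to fail with "No such device" and a non-zero exit status. A condensed sketch of the failing case; the malloc0 bdev and the fd plumbing of the real config are dropped, and /dev/null stands in for the output descriptor:

# Negative case: uring0 is deleted by the config before the copy starts,
# so opening it as --ib must fail; the test asserts the non-zero exit.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

if "$SPDK_DD" --ib=uring0 --of=/dev/null --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_uring_create",
          "params": { "name": "uring0", "filename": "/dev/zram1" } },
        { "method": "bdev_uring_delete", "params": { "name": "uring0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
); then
    echo "unexpected success: uring0 should be gone" >&2
    exit 1
fi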
00:07:57.419 [2024-07-15 18:58:24.681125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64267 ] 00:07:57.419 { 00:07:57.419 "subsystems": [ 00:07:57.419 { 00:07:57.419 "subsystem": "bdev", 00:07:57.419 "config": [ 00:07:57.419 { 00:07:57.419 "params": { 00:07:57.419 "block_size": 512, 00:07:57.419 "num_blocks": 1048576, 00:07:57.419 "name": "malloc0" 00:07:57.419 }, 00:07:57.419 "method": "bdev_malloc_create" 00:07:57.419 }, 00:07:57.419 { 00:07:57.419 "params": { 00:07:57.419 "filename": "/dev/zram1", 00:07:57.419 "name": "uring0" 00:07:57.419 }, 00:07:57.419 "method": "bdev_uring_create" 00:07:57.419 }, 00:07:57.419 { 00:07:57.419 "params": { 00:07:57.419 "name": "uring0" 00:07:57.419 }, 00:07:57.419 "method": "bdev_uring_delete" 00:07:57.419 }, 00:07:57.419 { 00:07:57.419 "method": "bdev_wait_for_examine" 00:07:57.419 } 00:07:57.419 ] 00:07:57.419 } 00:07:57.419 ] 00:07:57.419 } 00:07:57.678 [2024-07-15 18:58:24.816723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.678 [2024-07-15 18:58:24.921144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.937 [2024-07-15 18:58:24.975918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.504  Copying: 0/0 [B] (average 0 Bps) 00:07:58.504 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.504 18:58:25 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:58.504 [2024-07-15 18:58:25.634903] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:07:58.504 [2024-07-15 18:58:25.634993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64291 ] 00:07:58.505 { 00:07:58.505 "subsystems": [ 00:07:58.505 { 00:07:58.505 "subsystem": "bdev", 00:07:58.505 "config": [ 00:07:58.505 { 00:07:58.505 "params": { 00:07:58.505 "block_size": 512, 00:07:58.505 "num_blocks": 1048576, 00:07:58.505 "name": "malloc0" 00:07:58.505 }, 00:07:58.505 "method": "bdev_malloc_create" 00:07:58.505 }, 00:07:58.505 { 00:07:58.505 "params": { 00:07:58.505 "filename": "/dev/zram1", 00:07:58.505 "name": "uring0" 00:07:58.505 }, 00:07:58.505 "method": "bdev_uring_create" 00:07:58.505 }, 00:07:58.505 { 00:07:58.505 "params": { 00:07:58.505 "name": "uring0" 00:07:58.505 }, 00:07:58.505 "method": "bdev_uring_delete" 00:07:58.505 }, 00:07:58.505 { 00:07:58.505 "method": "bdev_wait_for_examine" 00:07:58.505 } 00:07:58.505 ] 00:07:58.505 } 00:07:58.505 ] 00:07:58.505 } 00:07:58.505 [2024-07-15 18:58:25.773582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.764 [2024-07-15 18:58:25.880437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.764 [2024-07-15 18:58:25.935825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:59.022 [2024-07-15 18:58:26.137420] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:59.022 [2024-07-15 18:58:26.137471] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:59.022 [2024-07-15 18:58:26.137483] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:59.022 [2024-07-15 18:58:26.137494] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.281 [2024-07-15 18:58:26.442304] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.281 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:59.281 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:59.281 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:59.281 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:59.281 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:59.281 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:59.281 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:59.281 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:59.281 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:59.281 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:59.540 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:59.540 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:59.540 00:07:59.540 real 0m15.472s 00:07:59.540 user 0m10.478s 00:07:59.540 sys 0m12.495s 00:07:59.540 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.540 18:58:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:59.540 ************************************ 00:07:59.540 END TEST dd_uring_copy 00:07:59.540 ************************************ 00:07:59.799 18:58:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:59.799 00:07:59.799 real 0m15.617s 00:07:59.799 user 0m10.539s 00:07:59.799 sys 0m12.580s 00:07:59.799 18:58:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.799 18:58:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:59.799 ************************************ 00:07:59.799 END TEST spdk_dd_uring 00:07:59.799 ************************************ 00:07:59.799 18:58:26 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:59.799 18:58:26 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:59.799 18:58:26 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.799 18:58:26 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.799 18:58:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:59.799 ************************************ 00:07:59.799 START TEST spdk_dd_sparse 00:07:59.799 ************************************ 00:07:59.799 18:58:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:59.799 * Looking for test storage... 00:07:59.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:59.799 18:58:26 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:59.799 1+0 records in 00:07:59.799 1+0 records out 00:07:59.799 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00542231 s, 774 MB/s 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:59.799 1+0 records in 00:07:59.799 1+0 records out 00:07:59.799 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00656042 s, 639 MB/s 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:59.799 1+0 records in 00:07:59.799 1+0 records out 00:07:59.799 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00708285 s, 592 MB/s 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:59.799 ************************************ 00:07:59.799 START TEST dd_sparse_file_to_file 00:07:59.799 ************************************ 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:59.799 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:00.058 [2024-07-15 18:58:27.106390] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:00.058 [2024-07-15 18:58:27.106485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64382 ] 00:08:00.058 { 00:08:00.058 "subsystems": [ 00:08:00.058 { 00:08:00.058 "subsystem": "bdev", 00:08:00.058 "config": [ 00:08:00.058 { 00:08:00.058 "params": { 00:08:00.058 "block_size": 4096, 00:08:00.058 "filename": "dd_sparse_aio_disk", 00:08:00.058 "name": "dd_aio" 00:08:00.058 }, 00:08:00.058 "method": "bdev_aio_create" 00:08:00.058 }, 00:08:00.058 { 00:08:00.058 "params": { 00:08:00.058 "lvs_name": "dd_lvstore", 00:08:00.058 "bdev_name": "dd_aio" 00:08:00.058 }, 00:08:00.058 "method": "bdev_lvol_create_lvstore" 00:08:00.058 }, 00:08:00.058 { 00:08:00.058 "method": "bdev_wait_for_examine" 00:08:00.058 } 00:08:00.058 ] 00:08:00.058 } 00:08:00.058 ] 00:08:00.058 } 00:08:00.058 [2024-07-15 18:58:27.247875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.317 [2024-07-15 18:58:27.372231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.317 [2024-07-15 18:58:27.430843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:00.576  Copying: 12/36 [MB] (average 800 MBps) 00:08:00.576 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:00.576 18:58:27 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:00.576 00:08:00.576 real 0m0.740s 00:08:00.576 user 0m0.473s 00:08:00.576 sys 0m0.367s 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:00.576 ************************************ 00:08:00.576 END TEST dd_sparse_file_to_file 00:08:00.576 ************************************ 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:00.576 ************************************ 00:08:00.576 START TEST dd_sparse_file_to_bdev 00:08:00.576 ************************************ 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:00.576 18:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:00.835 [2024-07-15 18:58:27.886065] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
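For readers who want to reproduce the dd_sparse_file_to_file result above outside the harness, the following is a minimal shell sketch of the same check. The file names, JSON methods and spdk_dd flags are taken from the log itself; the standalone config file name (aio.json) and the assumption that spdk_dd runs from an already-prepared SPDK build (hugepages configured) are illustrative, not part of the original test script.

# Sketch: build a sparse 36 MiB input by writing three 4 MiB chunks at 0, 16 and 32 MiB,
# copy it with spdk_dd --sparse, then confirm the apparent size (%s) matches while the
# allocated blocks (%b) stay at the three written extents (24576 x 512 B = 12 MiB).
truncate dd_sparse_aio_disk --size 104857600
dd if=/dev/zero of=file_zero1 bs=4M count=1
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
cat > aio.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_aio_create",
    "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
  { "method": "bdev_lvol_create_lvstore",
    "params": { "bdev_name": "dd_aio", "lvs_name": "dd_lvstore" } },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 \
    --bs=12582912 --sparse --json aio.json
[ "$(stat --printf=%s file_zero1)" = "$(stat --printf=%s file_zero2)" ]   # same apparent size
[ "$(stat --printf=%b file_zero1)" = "$(stat --printf=%b file_zero2)" ]   # same allocated blocks

The lvstore entry is only needed because the later file_to_bdev and bdev_to_file steps reuse it; for a pure file-to-file copy it could be dropped.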
00:08:00.835 [2024-07-15 18:58:27.886156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64430 ] 00:08:00.835 { 00:08:00.835 "subsystems": [ 00:08:00.835 { 00:08:00.835 "subsystem": "bdev", 00:08:00.835 "config": [ 00:08:00.835 { 00:08:00.835 "params": { 00:08:00.835 "block_size": 4096, 00:08:00.835 "filename": "dd_sparse_aio_disk", 00:08:00.835 "name": "dd_aio" 00:08:00.835 }, 00:08:00.835 "method": "bdev_aio_create" 00:08:00.835 }, 00:08:00.835 { 00:08:00.835 "params": { 00:08:00.835 "lvs_name": "dd_lvstore", 00:08:00.835 "lvol_name": "dd_lvol", 00:08:00.835 "size_in_mib": 36, 00:08:00.835 "thin_provision": true 00:08:00.835 }, 00:08:00.835 "method": "bdev_lvol_create" 00:08:00.835 }, 00:08:00.835 { 00:08:00.835 "method": "bdev_wait_for_examine" 00:08:00.835 } 00:08:00.835 ] 00:08:00.835 } 00:08:00.835 ] 00:08:00.835 } 00:08:00.835 [2024-07-15 18:58:28.020098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.094 [2024-07-15 18:58:28.133043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.094 [2024-07-15 18:58:28.191206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:01.353  Copying: 12/36 [MB] (average 500 MBps) 00:08:01.353 00:08:01.353 00:08:01.353 real 0m0.724s 00:08:01.353 user 0m0.484s 00:08:01.353 sys 0m0.355s 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:01.353 ************************************ 00:08:01.353 END TEST dd_sparse_file_to_bdev 00:08:01.353 ************************************ 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:01.353 ************************************ 00:08:01.353 START TEST dd_sparse_bdev_to_file 00:08:01.353 ************************************ 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
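The dd_sparse_file_to_bdev step that just completed writes the sparse file into a thin-provisioned logical volume. A condensed sketch of that target configuration and invocation, assembled from the JSON and command line shown in the log (only the standalone lvol.json file name is an assumption), is:

# Sketch: create a 36 MiB thin-provisioned lvol on the dd_lvstore discovered on dd_aio and
# copy the sparse file into it; thin_provision=true means clusters are allocated only for
# the ranges spdk_dd actually writes, so the holes in file_zero2 consume no space.
cat > lvol.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_aio_create",
    "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
  { "method": "bdev_lvol_create",
    "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                "size_in_mib": 36, "thin_provision": true } },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol \
    --bs=12582912 --sparse --json lvol.json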
00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:01.353 18:58:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:01.612 [2024-07-15 18:58:28.671710] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:01.612 [2024-07-15 18:58:28.671810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64463 ] 00:08:01.612 { 00:08:01.612 "subsystems": [ 00:08:01.612 { 00:08:01.612 "subsystem": "bdev", 00:08:01.612 "config": [ 00:08:01.612 { 00:08:01.612 "params": { 00:08:01.612 "block_size": 4096, 00:08:01.612 "filename": "dd_sparse_aio_disk", 00:08:01.612 "name": "dd_aio" 00:08:01.612 }, 00:08:01.612 "method": "bdev_aio_create" 00:08:01.612 }, 00:08:01.612 { 00:08:01.612 "method": "bdev_wait_for_examine" 00:08:01.612 } 00:08:01.612 ] 00:08:01.612 } 00:08:01.612 ] 00:08:01.612 } 00:08:01.612 [2024-07-15 18:58:28.809895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.871 [2024-07-15 18:58:28.929325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.871 [2024-07-15 18:58:28.984898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:02.156  Copying: 12/36 [MB] (average 857 MBps) 00:08:02.156 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:02.156 00:08:02.156 real 0m0.718s 00:08:02.156 user 0m0.489s 00:08:02.156 sys 0m0.334s 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.156 ************************************ 00:08:02.156 END TEST dd_sparse_bdev_to_file 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:02.156 ************************************ 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:02.156 00:08:02.156 real 0m2.481s 00:08:02.156 user 0m1.546s 00:08:02.156 sys 0m1.249s 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.156 18:58:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:02.156 ************************************ 00:08:02.156 END TEST spdk_dd_sparse 00:08:02.156 ************************************ 00:08:02.156 18:58:29 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:02.156 18:58:29 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:02.156 18:58:29 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.156 18:58:29 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.156 18:58:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:02.415 ************************************ 00:08:02.415 START TEST spdk_dd_negative 00:08:02.415 ************************************ 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:02.415 * Looking for test storage... 00:08:02.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.415 ************************************ 00:08:02.415 START TEST dd_invalid_arguments 00:08:02.415 ************************************ 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.415 18:58:29 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.415 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:02.415 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:02.415 00:08:02.415 CPU options: 00:08:02.415 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:02.415 (like [0,1,10]) 00:08:02.415 --lcores lcore to CPU mapping list. The list is in the format: 00:08:02.415 [<,lcores[@CPUs]>...] 00:08:02.415 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:02.415 Within the group, '-' is used for range separator, 00:08:02.415 ',' is used for single number separator. 00:08:02.415 '( )' can be omitted for single element group, 00:08:02.415 '@' can be omitted if cpus and lcores have the same value 00:08:02.415 --disable-cpumask-locks Disable CPU core lock files. 00:08:02.415 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:02.415 pollers in the app support interrupt mode) 00:08:02.416 -p, --main-core main (primary) core for DPDK 00:08:02.416 00:08:02.416 Configuration options: 00:08:02.416 -c, --config, --json JSON config file 00:08:02.416 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:02.416 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:02.416 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:02.416 --rpcs-allowed comma-separated list of permitted RPCS 00:08:02.416 --json-ignore-init-errors don't exit on invalid config entry 00:08:02.416 00:08:02.416 Memory options: 00:08:02.416 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:02.416 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:02.416 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:02.416 -R, --huge-unlink unlink huge files after initialization 00:08:02.416 -n, --mem-channels number of memory channels used for DPDK 00:08:02.416 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:02.416 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:02.416 --no-huge run without using hugepages 00:08:02.416 -i, --shm-id shared memory ID (optional) 00:08:02.416 -g, --single-file-segments force creating just one hugetlbfs file 00:08:02.416 00:08:02.416 PCI options: 00:08:02.416 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:02.416 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:02.416 -u, --no-pci disable PCI access 00:08:02.416 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:02.416 00:08:02.416 Log options: 00:08:02.416 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:02.416 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:02.416 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:02.416 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:02.416 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:08:02.416 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:08:02.416 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:08:02.416 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:02.416 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:08:02.416 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:08:02.416 virtio_vfio_user, vmd) 00:08:02.416 --silence-noticelog disable notice level logging to stderr 00:08:02.416 00:08:02.416 Trace options: 00:08:02.416 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:02.416 setting 0 to disable trace (default 32768) 00:08:02.416 Tracepoints vary in size and can use more than one trace entry. 00:08:02.416 -e, --tpoint-group [:] 00:08:02.416 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:02.416 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:02.416 [2024-07-15 18:58:29.624258] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:02.416 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:08:02.416 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:02.416 a tracepoint group. First tpoint inside a group can be enabled by 00:08:02.416 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:02.416 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:02.416 in /include/spdk_internal/trace_defs.h 00:08:02.416 00:08:02.416 Other options: 00:08:02.416 -h, --help show this usage 00:08:02.416 -v, --version print SPDK version 00:08:02.416 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:02.416 --env-context Opaque context for use of the env implementation 00:08:02.416 00:08:02.416 Application specific: 00:08:02.416 [--------- DD Options ---------] 00:08:02.416 --if Input file. Must specify either --if or --ib. 00:08:02.416 --ib Input bdev. Must specifier either --if or --ib 00:08:02.416 --of Output file. Must specify either --of or --ob. 00:08:02.416 --ob Output bdev. Must specify either --of or --ob. 00:08:02.416 --iflag Input file flags. 00:08:02.416 --oflag Output file flags. 00:08:02.416 --bs I/O unit size (default: 4096) 00:08:02.416 --qd Queue depth (default: 2) 00:08:02.416 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:02.416 --skip Skip this many I/O units at start of input. (default: 0) 00:08:02.416 --seek Skip this many I/O units at start of output. (default: 0) 00:08:02.416 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:02.416 --sparse Enable hole skipping in input target 00:08:02.416 Available iflag and oflag values: 00:08:02.416 append - append mode 00:08:02.416 direct - use direct I/O for data 00:08:02.416 directory - fail unless a directory 00:08:02.416 dsync - use synchronized I/O for data 00:08:02.416 noatime - do not update access time 00:08:02.416 noctty - do not assign controlling terminal from file 00:08:02.416 nofollow - do not follow symlinks 00:08:02.416 nonblock - use non-blocking I/O 00:08:02.416 sync - use synchronized I/O for data and metadata 00:08:02.416 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:08:02.416 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.416 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.416 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.416 00:08:02.416 real 0m0.098s 00:08:02.416 user 0m0.064s 00:08:02.416 sys 0m0.030s 00:08:02.416 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.416 18:58:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:02.416 ************************************ 00:08:02.416 END TEST dd_invalid_arguments 00:08:02.416 ************************************ 00:08:02.416 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:02.416 18:58:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:02.416 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.416 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.416 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.676 ************************************ 00:08:02.676 START TEST dd_double_input 00:08:02.676 ************************************ 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:02.676 [2024-07-15 18:58:29.756948] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
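The double-input failure above illustrates the pattern every test in this negative suite follows: run spdk_dd with an invalid option combination and require a non-zero exit status. A minimal equivalent without the harness is sketched below; the plain `!` guard is an assumed stand-in for the NOT helper from common/autotest_common.sh, and the paths are copied from the log.

# Sketch: spdk_dd must refuse a command line that names both a file input (--if)
# and a bdev input (--ib); the check passes only when the exit status is non-zero.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
if ! "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= ; then
    echo "dd_double_input: got the expected non-zero exit status"
else
    echo "dd_double_input: spdk_dd unexpectedly succeeded" >&2
    exit 1
fi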
00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.676 00:08:02.676 real 0m0.065s 00:08:02.676 user 0m0.043s 00:08:02.676 sys 0m0.019s 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:02.676 ************************************ 00:08:02.676 END TEST dd_double_input 00:08:02.676 ************************************ 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.676 ************************************ 00:08:02.676 START TEST dd_double_output 00:08:02.676 ************************************ 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:02.676 [2024-07-15 18:58:29.886726] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.676 00:08:02.676 real 0m0.076s 00:08:02.676 user 0m0.050s 00:08:02.676 sys 0m0.024s 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:02.676 ************************************ 00:08:02.676 END TEST dd_double_output 00:08:02.676 ************************************ 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.676 ************************************ 00:08:02.676 START TEST dd_no_input 00:08:02.676 ************************************ 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.676 18:58:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.935 18:58:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.935 18:58:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.935 18:58:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.935 18:58:29 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:02.935 [2024-07-15 18:58:30.016869] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.935 00:08:02.935 real 0m0.078s 00:08:02.935 user 0m0.044s 00:08:02.935 sys 0m0.031s 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:02.935 ************************************ 00:08:02.935 END TEST dd_no_input 00:08:02.935 ************************************ 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.935 ************************************ 00:08:02.935 START TEST dd_no_output 00:08:02.935 ************************************ 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:08:02.935 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.936 18:58:30 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.936 [2024-07-15 18:58:30.150859] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.936 00:08:02.936 real 0m0.079s 00:08:02.936 user 0m0.046s 00:08:02.936 sys 0m0.029s 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:02.936 ************************************ 00:08:02.936 END TEST dd_no_output 00:08:02.936 ************************************ 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:02.936 ************************************ 00:08:02.936 START TEST dd_wrong_blocksize 00:08:02.936 ************************************ 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:02.936 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:03.195 [2024-07-15 18:58:30.273961] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:03.195 00:08:03.195 real 0m0.067s 00:08:03.195 user 0m0.036s 00:08:03.195 sys 0m0.029s 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:03.195 ************************************ 00:08:03.195 END TEST dd_wrong_blocksize 00:08:03.195 ************************************ 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:03.195 ************************************ 00:08:03.195 START TEST dd_smaller_blocksize 00:08:03.195 ************************************ 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.195 18:58:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:03.195 [2024-07-15 18:58:30.407628] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:03.195 [2024-07-15 18:58:30.407757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64687 ] 00:08:03.454 [2024-07-15 18:58:30.549457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.454 [2024-07-15 18:58:30.672468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.454 [2024-07-15 18:58:30.729345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.022 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:04.022 [2024-07-15 18:58:31.045866] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:04.022 [2024-07-15 18:58:31.045966] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.022 [2024-07-15 18:58:31.162393] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.022 00:08:04.022 real 0m0.918s 00:08:04.022 user 0m0.427s 00:08:04.022 sys 0m0.382s 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:04.022 ************************************ 00:08:04.022 END TEST dd_smaller_blocksize 00:08:04.022 ************************************ 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.022 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.281 ************************************ 00:08:04.281 START TEST dd_invalid_count 00:08:04.281 ************************************ 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:04.281 [2024-07-15 18:58:31.378626] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.281 00:08:04.281 real 0m0.080s 00:08:04.281 user 0m0.044s 00:08:04.281 sys 0m0.034s 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.281 ************************************ 00:08:04.281 END TEST dd_invalid_count 00:08:04.281 ************************************ 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.281 ************************************ 00:08:04.281 START TEST dd_invalid_oflag 00:08:04.281 ************************************ 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:04.281 [2024-07-15 18:58:31.509745] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.281 00:08:04.281 real 0m0.075s 00:08:04.281 user 0m0.043s 00:08:04.281 sys 0m0.030s 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.281 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:04.281 ************************************ 
00:08:04.281 END TEST dd_invalid_oflag 00:08:04.281 ************************************ 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.541 ************************************ 00:08:04.541 START TEST dd_invalid_iflag 00:08:04.541 ************************************ 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:04.541 [2024-07-15 18:58:31.646738] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.541 00:08:04.541 real 0m0.081s 00:08:04.541 user 0m0.050s 00:08:04.541 sys 0m0.029s 00:08:04.541 ************************************ 00:08:04.541 END TEST dd_invalid_iflag 00:08:04.541 ************************************ 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.541 ************************************ 00:08:04.541 START TEST dd_unknown_flag 00:08:04.541 ************************************ 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.541 18:58:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:04.541 [2024-07-15 18:58:31.784300] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:08:04.541 [2024-07-15 18:58:31.784433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64779 ] 00:08:04.800 [2024-07-15 18:58:31.920708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.800 [2024-07-15 18:58:32.040333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.059 [2024-07-15 18:58:32.096966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:05.059 [2024-07-15 18:58:32.132520] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:05.059 [2024-07-15 18:58:32.132577] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.059 [2024-07-15 18:58:32.132636] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:05.059 [2024-07-15 18:58:32.132651] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.059 [2024-07-15 18:58:32.132901] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:05.059 [2024-07-15 18:58:32.132924] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.059 [2024-07-15 18:58:32.132988] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:05.059 [2024-07-15 18:58:32.133002] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:05.059 [2024-07-15 18:58:32.248557] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:05.319 00:08:05.319 real 0m0.634s 00:08:05.319 user 0m0.366s 00:08:05.319 sys 0m0.171s 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.319 ************************************ 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:05.319 END TEST dd_unknown_flag 00:08:05.319 ************************************ 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.319 ************************************ 00:08:05.319 START TEST dd_invalid_json 00:08:05.319 ************************************ 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:08:05.319 18:58:32 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.319 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:05.319 [2024-07-15 18:58:32.469198] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:08:05.319 [2024-07-15 18:58:32.469315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64813 ] 00:08:05.578 [2024-07-15 18:58:32.611667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.578 [2024-07-15 18:58:32.736430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.578 [2024-07-15 18:58:32.736547] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:05.578 [2024-07-15 18:58:32.736570] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:05.578 [2024-07-15 18:58:32.736582] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.578 [2024-07-15 18:58:32.736630] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.578 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:08:05.578 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:05.578 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:08:05.578 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:08:05.578 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:08:05.578 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:05.578 00:08:05.578 real 0m0.436s 00:08:05.578 user 0m0.251s 00:08:05.578 sys 0m0.083s 00:08:05.578 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.578 ************************************ 00:08:05.578 END TEST dd_invalid_json 00:08:05.578 ************************************ 00:08:05.578 18:58:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:05.837 18:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:05.837 00:08:05.837 real 0m3.438s 00:08:05.837 user 0m1.675s 00:08:05.837 sys 0m1.364s 00:08:05.837 ************************************ 00:08:05.837 END TEST spdk_dd_negative 00:08:05.837 ************************************ 00:08:05.837 18:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.837 18:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.837 18:58:32 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:05.837 ************************************ 00:08:05.837 END TEST spdk_dd 00:08:05.837 ************************************ 00:08:05.837 00:08:05.837 real 1m20.406s 00:08:05.837 user 0m52.506s 00:08:05.837 sys 0m34.338s 00:08:05.837 18:58:32 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.837 18:58:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:05.837 18:58:32 -- common/autotest_common.sh@1142 -- # return 0 00:08:05.837 18:58:32 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:05.837 18:58:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:05.837 18:58:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:05.837 18:58:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.837 18:58:32 -- common/autotest_common.sh@10 -- # set +x 00:08:05.837 18:58:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:08:05.837 18:58:33 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:05.837 18:58:33 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:05.837 18:58:33 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:05.837 18:58:33 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:05.837 18:58:33 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:05.837 18:58:33 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:05.837 18:58:33 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:05.837 18:58:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.837 18:58:33 -- common/autotest_common.sh@10 -- # set +x 00:08:05.837 ************************************ 00:08:05.837 START TEST nvmf_tcp 00:08:05.837 ************************************ 00:08:05.837 18:58:33 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:05.837 * Looking for test storage... 00:08:05.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.837 18:58:33 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.837 18:58:33 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.837 18:58:33 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.837 18:58:33 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.837 18:58:33 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.837 18:58:33 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.837 18:58:33 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:05.837 18:58:33 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:05.837 18:58:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:05.837 18:58:33 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:05.837 18:58:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:06.096 18:58:33 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:06.096 18:58:33 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:06.096 18:58:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:06.096 18:58:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.096 18:58:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:06.096 ************************************ 00:08:06.096 START TEST nvmf_host_management 00:08:06.096 ************************************ 00:08:06.096 
18:58:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:06.096 * Looking for test storage... 00:08:06.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:06.096 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:06.097 Cannot find device "nvmf_init_br" 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:06.097 Cannot find device "nvmf_tgt_br" 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:06.097 Cannot find device "nvmf_tgt_br2" 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:06.097 Cannot find device "nvmf_init_br" 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:06.097 Cannot find device "nvmf_tgt_br" 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:08:06.097 18:58:33 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:06.097 Cannot find device "nvmf_tgt_br2" 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:06.097 Cannot find device "nvmf_br" 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:06.097 Cannot find device "nvmf_init_if" 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:06.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:06.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:06.097 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:06.356 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:06.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:08:06.615 00:08:06.615 --- 10.0.0.2 ping statistics --- 00:08:06.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.615 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:06.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:06.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:08:06.615 00:08:06.615 --- 10.0.0.3 ping statistics --- 00:08:06.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.615 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:06.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:06.615 00:08:06.615 --- 10.0.0.1 ping statistics --- 00:08:06.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.615 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=65070 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:06.615 18:58:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65070 00:08:06.616 18:58:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65070 ']' 00:08:06.616 18:58:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.616 18:58:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.616 18:58:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.616 18:58:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.616 18:58:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.616 [2024-07-15 18:58:33.752598] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:06.616 [2024-07-15 18:58:33.752707] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.616 [2024-07-15 18:58:33.895722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.875 [2024-07-15 18:58:34.014587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.875 [2024-07-15 18:58:34.014652] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.875 [2024-07-15 18:58:34.014664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.875 [2024-07-15 18:58:34.014673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.875 [2024-07-15 18:58:34.014680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:06.875 [2024-07-15 18:58:34.014899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.875 [2024-07-15 18:58:34.015801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.875 [2024-07-15 18:58:34.015900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:06.875 [2024-07-15 18:58:34.015908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.875 [2024-07-15 18:58:34.069885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.813 [2024-07-15 18:58:34.815549] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.813 Malloc0 00:08:07.813 [2024-07-15 18:58:34.900011] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65124 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65124 /var/tmp/bdevperf.sock 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65124 ']' 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:07.813 { 00:08:07.813 "params": { 00:08:07.813 "name": "Nvme$subsystem", 00:08:07.813 "trtype": "$TEST_TRANSPORT", 00:08:07.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.813 "adrfam": "ipv4", 00:08:07.813 "trsvcid": "$NVMF_PORT", 00:08:07.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.813 "hdgst": ${hdgst:-false}, 00:08:07.813 "ddgst": ${ddgst:-false} 00:08:07.813 }, 00:08:07.813 "method": "bdev_nvme_attach_controller" 00:08:07.813 } 00:08:07.813 EOF 00:08:07.813 )") 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:07.813 18:58:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:07.813 "params": { 00:08:07.813 "name": "Nvme0", 00:08:07.813 "trtype": "tcp", 00:08:07.813 "traddr": "10.0.0.2", 00:08:07.813 "adrfam": "ipv4", 00:08:07.813 "trsvcid": "4420", 00:08:07.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:07.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:07.813 "hdgst": false, 00:08:07.814 "ddgst": false 00:08:07.814 }, 00:08:07.814 "method": "bdev_nvme_attach_controller" 00:08:07.814 }' 00:08:07.814 [2024-07-15 18:58:35.013745] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:08:07.814 [2024-07-15 18:58:35.013857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65124 ] 00:08:08.073 [2024-07-15 18:58:35.154422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.073 [2024-07-15 18:58:35.292399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.332 [2024-07-15 18:58:35.364613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.332 Running I/O for 10 seconds... 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.901 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.901 [2024-07-15 18:58:36.102804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.102877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.102903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.102914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.102926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.102945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.102957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.102966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.102978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.102987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.102999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.103982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.103990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.104002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.104012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.104023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.104032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.104043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:08.901 [2024-07-15 18:58:36.104052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.901 [2024-07-15 18:58:36.104063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.901 [2024-07-15 18:58:36.104072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 [2024-07-15 18:58:36.104093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 [2024-07-15 18:58:36.104113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 [2024-07-15 18:58:36.104133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 [2024-07-15 18:58:36.104153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 [2024-07-15 18:58:36.104173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 [2024-07-15 18:58:36.104193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 [2024-07-15 18:58:36.104244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 [2024-07-15 18:58:36.104263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 
[2024-07-15 18:58:36.104287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 [2024-07-15 18:58:36.104307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 [2024-07-15 18:58:36.104331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 [2024-07-15 18:58:36.104350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.902 [2024-07-15 18:58:36.104369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.902 [2024-07-15 18:58:36.104379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11de1c0 is same with the state(5) to be set 00:08:08.902 [2024-07-15 18:58:36.104450] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11de1c0 was disconnected and freed. reset controller. 
00:08:08.902 [2024-07-15 18:58:36.105729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:08.902 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.902 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:08.902 task offset: 113152 on job bdev=Nvme0n1 fails 00:08:08.902 00:08:08.902 Latency(us) 00:08:08.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.902 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:08.902 Job: Nvme0n1 ended in about 0.62 seconds with error 00:08:08.902 Verification LBA range: start 0x0 length 0x400 00:08:08.902 Nvme0n1 : 0.62 1340.66 83.79 103.13 0.00 43130.41 2293.76 43611.23 00:08:08.902 =================================================================================================================== 00:08:08.902 Total : 1340.66 83.79 103.13 0.00 43130.41 2293.76 43611.23 00:08:08.902 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.902 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.902 [2024-07-15 18:58:36.108140] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.902 [2024-07-15 18:58:36.108177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d5ef0 (9): Bad file descriptor 00:08:08.902 [2024-07-15 18:58:36.113339] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:08.902 18:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.902 18:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:09.836 18:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65124 00:08:09.836 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65124) - No such process 00:08:09.836 18:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:09.836 18:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:09.836 18:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:09.836 18:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:09.836 18:58:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:09.836 18:58:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:09.837 18:58:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:10.095 18:58:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:10.095 { 00:08:10.095 "params": { 00:08:10.095 "name": "Nvme$subsystem", 00:08:10.095 "trtype": "$TEST_TRANSPORT", 00:08:10.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.095 "adrfam": "ipv4", 00:08:10.095 "trsvcid": "$NVMF_PORT", 00:08:10.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.095 "hdgst": ${hdgst:-false}, 00:08:10.095 "ddgst": 
${ddgst:-false} 00:08:10.095 }, 00:08:10.095 "method": "bdev_nvme_attach_controller" 00:08:10.095 } 00:08:10.095 EOF 00:08:10.095 )") 00:08:10.095 18:58:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:10.095 18:58:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:10.095 18:58:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:10.095 18:58:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:10.095 "params": { 00:08:10.095 "name": "Nvme0", 00:08:10.095 "trtype": "tcp", 00:08:10.095 "traddr": "10.0.0.2", 00:08:10.095 "adrfam": "ipv4", 00:08:10.095 "trsvcid": "4420", 00:08:10.095 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.095 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:10.095 "hdgst": false, 00:08:10.095 "ddgst": false 00:08:10.095 }, 00:08:10.095 "method": "bdev_nvme_attach_controller" 00:08:10.095 }' 00:08:10.095 [2024-07-15 18:58:37.186266] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:10.095 [2024-07-15 18:58:37.187265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65162 ] 00:08:10.095 [2024-07-15 18:58:37.330893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.353 [2024-07-15 18:58:37.505869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.353 [2024-07-15 18:58:37.592896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.612 Running I/O for 1 seconds... 00:08:11.550 00:08:11.550 Latency(us) 00:08:11.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.550 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:11.550 Verification LBA range: start 0x0 length 0x400 00:08:11.550 Nvme0n1 : 1.04 1417.17 88.57 0.00 0.00 44277.45 5213.09 40513.16 00:08:11.550 =================================================================================================================== 00:08:11.550 Total : 1417.17 88.57 0.00 0.00 44277.45 5213.09 40513.16 00:08:11.809 18:58:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:11.809 18:58:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:11.809 18:58:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:11.809 18:58:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:11.809 18:58:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:11.809 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:11.809 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:11.809 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:11.809 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:11.809 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:11.809 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:11.809 rmmod nvme_tcp 00:08:11.809 rmmod nvme_fabrics 00:08:12.068 rmmod nvme_keyring 
00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65070 ']' 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65070 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 65070 ']' 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 65070 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65070 00:08:12.068 killing process with pid 65070 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65070' 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 65070 00:08:12.068 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 65070 00:08:12.327 [2024-07-15 18:58:39.410078] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:12.327 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:12.328 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:12.328 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:12.328 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.328 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.328 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.328 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.328 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.328 18:58:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:12.328 18:58:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:12.328 00:08:12.328 real 0m6.343s 00:08:12.328 user 0m24.384s 00:08:12.328 sys 0m1.683s 00:08:12.328 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.328 ************************************ 00:08:12.328 END TEST nvmf_host_management 00:08:12.328 ************************************ 00:08:12.328 18:58:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.328 18:58:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:12.328 18:58:39 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:12.328 18:58:39 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:12.328 18:58:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.328 18:58:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.328 ************************************ 00:08:12.328 START TEST nvmf_lvol 00:08:12.328 ************************************ 00:08:12.328 18:58:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:12.328 * Looking for test storage... 00:08:12.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:12.589 Cannot find device "nvmf_tgt_br" 00:08:12.589 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.590 Cannot find device "nvmf_tgt_br2" 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:12.590 Cannot find device "nvmf_tgt_br" 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:12.590 Cannot find device "nvmf_tgt_br2" 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:12.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:12.590 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:12.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:12.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:08:12.850 00:08:12.850 --- 10.0.0.2 ping statistics --- 00:08:12.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.850 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:12.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:12.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:08:12.850 00:08:12.850 --- 10.0.0.3 ping statistics --- 00:08:12.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.850 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:12.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:12.850 00:08:12.850 --- 10.0.0.1 ping statistics --- 00:08:12.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.850 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.850 18:58:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:12.850 18:58:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65380 00:08:12.850 18:58:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:12.850 18:58:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65380 00:08:12.850 18:58:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65380 ']' 00:08:12.850 18:58:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.850 18:58:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.850 18:58:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.850 18:58:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.850 18:58:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:12.850 [2024-07-15 18:58:40.059290] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:08:12.850 [2024-07-15 18:58:40.059380] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.109 [2024-07-15 18:58:40.201919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.109 [2024-07-15 18:58:40.330466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.109 [2024-07-15 18:58:40.330567] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.109 [2024-07-15 18:58:40.330582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.109 [2024-07-15 18:58:40.330592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.109 [2024-07-15 18:58:40.330602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.109 [2024-07-15 18:58:40.331109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.109 [2024-07-15 18:58:40.331414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.109 [2024-07-15 18:58:40.331422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.109 [2024-07-15 18:58:40.389832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.043 18:58:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.043 18:58:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:14.043 18:58:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.043 18:58:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.043 18:58:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:14.043 18:58:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.043 18:58:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:14.302 [2024-07-15 18:58:41.352999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.302 18:58:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:14.559 18:58:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:14.559 18:58:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:14.816 18:58:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:14.816 18:58:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:15.074 18:58:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:15.332 18:58:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5752057f-b62f-4692-8b67-25aa8379e08f 00:08:15.332 18:58:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5752057f-b62f-4692-8b67-25aa8379e08f lvol 20 00:08:15.589 18:58:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=6a156967-b1b5-4e57-abbc-4fb80189909b 00:08:15.589 18:58:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.847 18:58:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6a156967-b1b5-4e57-abbc-4fb80189909b 00:08:16.105 18:58:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:16.363 [2024-07-15 18:58:43.531051] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.363 18:58:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.621 18:58:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65456 00:08:16.621 18:58:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:16.621 18:58:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:17.601 18:58:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 6a156967-b1b5-4e57-abbc-4fb80189909b MY_SNAPSHOT 00:08:17.859 18:58:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2618c88d-2419-4dd4-838c-8281dbd80837 00:08:17.859 18:58:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 6a156967-b1b5-4e57-abbc-4fb80189909b 30 00:08:18.118 18:58:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 2618c88d-2419-4dd4-838c-8281dbd80837 MY_CLONE 00:08:18.376 18:58:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4738be6d-2ff9-4ead-a463-3e80e428163b 00:08:18.376 18:58:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 4738be6d-2ff9-4ead-a463-3e80e428163b 00:08:18.946 18:58:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65456 00:08:27.068 Initializing NVMe Controllers 00:08:27.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:27.068 Controller IO queue size 128, less than required. 00:08:27.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:27.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:27.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:27.068 Initialization complete. Launching workers. 
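Condensed, the nvmf_lvol flow traced above is: two malloc bdevs striped into a raid0, an lvstore and a small lvol built on top, the lvol exported over NVMe/TCP, and then snapshot, resize, clone and inflate exercised while spdk_nvme_perf keeps writing to the namespace. A rough sketch of the same RPC sequence, assuming the UUIDs printed by rpc.py are captured as shown (the actual UUIDs differ per run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                    # two 64 MiB bdevs with 512 B blocks
    $rpc bdev_malloc_create 64 512
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # prints the new lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # prints the new lvol UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf runs against the listener, mutate the lvol underneath it
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"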
00:08:27.068 ======================================================== 00:08:27.068 Latency(us) 00:08:27.068 Device Information : IOPS MiB/s Average min max 00:08:27.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10026.50 39.17 12768.27 2260.81 60279.08 00:08:27.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9837.00 38.43 13011.81 3607.79 62985.23 00:08:27.068 ======================================================== 00:08:27.068 Total : 19863.50 77.59 12888.87 2260.81 62985.23 00:08:27.068 00:08:27.068 18:58:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:27.068 18:58:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6a156967-b1b5-4e57-abbc-4fb80189909b 00:08:27.325 18:58:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5752057f-b62f-4692-8b67-25aa8379e08f 00:08:27.583 18:58:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:27.583 18:58:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:27.583 18:58:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:27.583 18:58:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:27.583 18:58:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:27.583 18:58:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:27.583 18:58:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:27.583 18:58:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:27.583 18:58:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:27.842 rmmod nvme_tcp 00:08:27.842 rmmod nvme_fabrics 00:08:27.842 rmmod nvme_keyring 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65380 ']' 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65380 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65380 ']' 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65380 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65380 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65380' 00:08:27.842 killing process with pid 65380 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65380 00:08:27.842 18:58:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65380 00:08:28.101 18:58:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.101 18:58:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
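The killprocess sequence traced above is essentially "confirm the pid still belongs to our reactor process, signal it, then reap it". Something along these lines, simplified relative to the real helper in autotest_common.sh (which also refuses to kill sudo-owned processes):

    kill_target() {
        local pid=$1
        ps --no-headers -o comm= "$pid" >/dev/null || return 0   # already gone
        kill "$pid"                                              # default SIGTERM, as in the trace
        wait "$pid" 2>/dev/null || true                          # reap it if it is our child
    }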
00:08:28.101 18:58:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.101 18:58:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.101 18:58:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.101 18:58:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.101 18:58:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.101 18:58:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.101 18:58:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:28.101 ************************************ 00:08:28.101 END TEST nvmf_lvol 00:08:28.101 ************************************ 00:08:28.101 00:08:28.101 real 0m15.790s 00:08:28.101 user 1m5.450s 00:08:28.101 sys 0m4.155s 00:08:28.101 18:58:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.101 18:58:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:28.101 18:58:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:28.101 18:58:55 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:28.101 18:58:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:28.101 18:58:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.101 18:58:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.101 ************************************ 00:08:28.101 START TEST nvmf_lvs_grow 00:08:28.101 ************************************ 00:08:28.101 18:58:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:28.360 * Looking for test storage... 
00:08:28.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.360 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:28.361 Cannot find device "nvmf_tgt_br" 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:28.361 Cannot find device "nvmf_tgt_br2" 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:28.361 Cannot find device "nvmf_tgt_br" 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:28.361 Cannot find device "nvmf_tgt_br2" 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:28.361 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:28.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:28.361 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:28.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:28.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:08:28.619 00:08:28.619 --- 10.0.0.2 ping statistics --- 00:08:28.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.619 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:28.619 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:28.619 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:08:28.619 00:08:28.619 --- 10.0.0.3 ping statistics --- 00:08:28.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.619 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:28.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:28.619 00:08:28.619 --- 10.0.0.1 ping statistics --- 00:08:28.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.619 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:28.619 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65790 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65790 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65790 ']' 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:28.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
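The interface names in this setup describe a small veth topology: one pair for the initiator that stays in the root namespace, two pairs whose far ends are moved into nvmf_tgt_ns_spdk for the target, and a bridge joining the near ends so 10.0.0.1 can reach 10.0.0.2/10.0.0.3. Boiled down from the trace above (the individual link-up commands are elided for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side, first address
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side, second address
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT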
00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:28.620 18:58:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.620 [2024-07-15 18:58:55.875158] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:08:28.620 [2024-07-15 18:58:55.875249] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.878 [2024-07-15 18:58:56.012830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.878 [2024-07-15 18:58:56.118143] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.878 [2024-07-15 18:58:56.118209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.878 [2024-07-15 18:58:56.118223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.878 [2024-07-15 18:58:56.118234] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.878 [2024-07-15 18:58:56.118243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.878 [2024-07-15 18:58:56.118272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.136 [2024-07-15 18:58:56.177340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.703 18:58:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.703 18:58:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:29.703 18:58:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.703 18:58:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:29.703 18:58:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.703 18:58:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.703 18:58:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:29.962 [2024-07-15 18:58:57.149639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.962 ************************************ 00:08:29.962 START TEST lvs_grow_clean 00:08:29.962 ************************************ 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:29.962 18:58:57 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.962 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:30.220 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:30.220 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:30.480 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=714182ed-9ed2-45d6-a586-07de10b43015 00:08:30.480 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 714182ed-9ed2-45d6-a586-07de10b43015 00:08:30.480 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:30.739 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:30.739 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:30.739 18:58:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 714182ed-9ed2-45d6-a586-07de10b43015 lvol 150 00:08:30.997 18:58:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6b42daf4-24f7-4dc7-9e69-830a3fc53121 00:08:30.997 18:58:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:30.997 18:58:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:31.255 [2024-07-15 18:58:58.376317] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:31.255 [2024-07-15 18:58:58.376428] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:31.255 true 00:08:31.255 18:58:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 714182ed-9ed2-45d6-a586-07de10b43015 00:08:31.255 18:58:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:31.515 18:58:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:31.515 18:58:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:31.773 18:58:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6b42daf4-24f7-4dc7-9e69-830a3fc53121 00:08:32.031 18:58:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:32.289 [2024-07-15 18:58:59.396984] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.289 18:58:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.549 18:58:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65867 00:08:32.549 18:58:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:32.549 18:58:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:32.549 18:58:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65867 /var/tmp/bdevperf.sock 00:08:32.549 18:58:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65867 ']' 00:08:32.549 18:58:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:32.549 18:58:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:32.549 18:58:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:32.549 18:58:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.549 18:58:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:32.549 [2024-07-15 18:58:59.719712] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
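The cluster counts asserted in this test follow from simple arithmetic: a 200 MiB backing file with the 4 MiB cluster size gives 50 clusters, of which 49 are reported as data clusters (the remainder holds lvstore metadata), and after the file is later grown to 400 MiB and the lvstore grown into it, the same query should report 99. The setup and the first check, stripped of the harness (the file path here is illustrative; the test uses its own target directory):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    truncate -s 200M /tmp/aio_bdev_file
    $rpc bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_create -u "$lvs" lvol 150                    # 150 MiB lvol inside the store
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 49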
00:08:32.549 [2024-07-15 18:58:59.719799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65867 ] 00:08:32.807 [2024-07-15 18:58:59.857296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.807 [2024-07-15 18:58:59.972156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.807 [2024-07-15 18:59:00.030477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:33.740 18:59:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.740 18:59:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:33.740 18:59:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:33.740 Nvme0n1 00:08:33.740 18:59:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:33.998 [ 00:08:33.998 { 00:08:33.998 "name": "Nvme0n1", 00:08:33.998 "aliases": [ 00:08:33.998 "6b42daf4-24f7-4dc7-9e69-830a3fc53121" 00:08:33.998 ], 00:08:33.998 "product_name": "NVMe disk", 00:08:33.998 "block_size": 4096, 00:08:33.998 "num_blocks": 38912, 00:08:33.998 "uuid": "6b42daf4-24f7-4dc7-9e69-830a3fc53121", 00:08:33.998 "assigned_rate_limits": { 00:08:33.998 "rw_ios_per_sec": 0, 00:08:33.998 "rw_mbytes_per_sec": 0, 00:08:33.998 "r_mbytes_per_sec": 0, 00:08:33.998 "w_mbytes_per_sec": 0 00:08:33.998 }, 00:08:33.998 "claimed": false, 00:08:33.998 "zoned": false, 00:08:33.998 "supported_io_types": { 00:08:33.998 "read": true, 00:08:33.998 "write": true, 00:08:33.998 "unmap": true, 00:08:33.998 "flush": true, 00:08:33.998 "reset": true, 00:08:33.998 "nvme_admin": true, 00:08:33.998 "nvme_io": true, 00:08:33.998 "nvme_io_md": false, 00:08:33.998 "write_zeroes": true, 00:08:33.998 "zcopy": false, 00:08:33.998 "get_zone_info": false, 00:08:33.998 "zone_management": false, 00:08:33.998 "zone_append": false, 00:08:33.998 "compare": true, 00:08:33.998 "compare_and_write": true, 00:08:33.998 "abort": true, 00:08:33.998 "seek_hole": false, 00:08:33.998 "seek_data": false, 00:08:33.998 "copy": true, 00:08:33.998 "nvme_iov_md": false 00:08:33.998 }, 00:08:33.998 "memory_domains": [ 00:08:33.998 { 00:08:33.998 "dma_device_id": "system", 00:08:33.998 "dma_device_type": 1 00:08:33.998 } 00:08:33.998 ], 00:08:33.998 "driver_specific": { 00:08:33.998 "nvme": [ 00:08:33.998 { 00:08:33.998 "trid": { 00:08:33.998 "trtype": "TCP", 00:08:33.998 "adrfam": "IPv4", 00:08:33.998 "traddr": "10.0.0.2", 00:08:33.998 "trsvcid": "4420", 00:08:33.998 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:33.998 }, 00:08:33.998 "ctrlr_data": { 00:08:33.998 "cntlid": 1, 00:08:33.998 "vendor_id": "0x8086", 00:08:33.998 "model_number": "SPDK bdev Controller", 00:08:33.998 "serial_number": "SPDK0", 00:08:33.998 "firmware_revision": "24.09", 00:08:33.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:33.998 "oacs": { 00:08:33.998 "security": 0, 00:08:33.998 "format": 0, 00:08:33.998 "firmware": 0, 00:08:33.998 "ns_manage": 0 00:08:33.998 }, 00:08:33.998 "multi_ctrlr": true, 00:08:33.998 
"ana_reporting": false 00:08:33.998 }, 00:08:33.998 "vs": { 00:08:33.998 "nvme_version": "1.3" 00:08:33.998 }, 00:08:33.998 "ns_data": { 00:08:33.998 "id": 1, 00:08:33.998 "can_share": true 00:08:33.998 } 00:08:33.998 } 00:08:33.998 ], 00:08:33.998 "mp_policy": "active_passive" 00:08:33.998 } 00:08:33.998 } 00:08:33.998 ] 00:08:33.998 18:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65896 00:08:33.998 18:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:33.998 18:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:34.256 Running I/O for 10 seconds... 00:08:35.189 Latency(us) 00:08:35.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.189 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:35.189 =================================================================================================================== 00:08:35.189 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:35.189 00:08:36.122 18:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 714182ed-9ed2-45d6-a586-07de10b43015 00:08:36.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.122 Nvme0n1 : 2.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:36.122 =================================================================================================================== 00:08:36.122 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:36.122 00:08:36.380 true 00:08:36.380 18:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:36.380 18:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 714182ed-9ed2-45d6-a586-07de10b43015 00:08:36.637 18:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:36.637 18:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:36.637 18:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65896 00:08:37.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.208 Nvme0n1 : 3.00 7154.33 27.95 0.00 0.00 0.00 0.00 0.00 00:08:37.208 =================================================================================================================== 00:08:37.208 Total : 7154.33 27.95 0.00 0.00 0.00 0.00 0.00 00:08:37.208 00:08:38.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.143 Nvme0n1 : 4.00 7207.25 28.15 0.00 0.00 0.00 0.00 0.00 00:08:38.143 =================================================================================================================== 00:08:38.143 Total : 7207.25 28.15 0.00 0.00 0.00 0.00 0.00 00:08:38.143 00:08:39.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.080 Nvme0n1 : 5.00 7289.80 28.48 0.00 0.00 0.00 0.00 0.00 00:08:39.080 =================================================================================================================== 00:08:39.080 Total : 7289.80 28.48 0.00 0.00 0.00 
0.00 0.00 00:08:39.080 00:08:40.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.494 Nvme0n1 : 6.00 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:08:40.494 =================================================================================================================== 00:08:40.494 Total : 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:08:40.494 00:08:41.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.430 Nvme0n1 : 7.00 7275.29 28.42 0.00 0.00 0.00 0.00 0.00 00:08:41.430 =================================================================================================================== 00:08:41.430 Total : 7275.29 28.42 0.00 0.00 0.00 0.00 0.00 00:08:41.430 00:08:42.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.367 Nvme0n1 : 8.00 7207.25 28.15 0.00 0.00 0.00 0.00 0.00 00:08:42.367 =================================================================================================================== 00:08:42.367 Total : 7207.25 28.15 0.00 0.00 0.00 0.00 0.00 00:08:42.367 00:08:43.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.302 Nvme0n1 : 9.00 7168.44 28.00 0.00 0.00 0.00 0.00 0.00 00:08:43.302 =================================================================================================================== 00:08:43.302 Total : 7168.44 28.00 0.00 0.00 0.00 0.00 0.00 00:08:43.302 00:08:44.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.238 Nvme0n1 : 10.00 7124.70 27.83 0.00 0.00 0.00 0.00 0.00 00:08:44.238 =================================================================================================================== 00:08:44.238 Total : 7124.70 27.83 0.00 0.00 0.00 0.00 0.00 00:08:44.238 00:08:44.238 00:08:44.238 Latency(us) 00:08:44.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.238 Nvme0n1 : 10.02 7124.91 27.83 0.00 0.00 17959.05 14358.34 42657.98 00:08:44.238 =================================================================================================================== 00:08:44.238 Total : 7124.91 27.83 0.00 0.00 17959.05 14358.34 42657.98 00:08:44.238 0 00:08:44.238 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65867 00:08:44.238 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65867 ']' 00:08:44.238 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65867 00:08:44.238 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:44.238 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:44.238 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65867 00:08:44.238 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:44.238 killing process with pid 65867 00:08:44.238 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:44.238 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65867' 00:08:44.238 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65867 
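What this phase is checking is that the lvstore can be grown while bdevperf keeps issuing writes to the exported lvol, and that the extra capacity becomes visible immediately. Once the backing file has been truncated to 400M and bdev_aio_rescan has picked up the new size, the check reduces to roughly the following (continuing with the $lvs variable from the sketch above):

    $rpc bdev_lvol_grow_lvstore -u "$lvs"                        # extend the store into the new space
    clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( clusters == 99 )) || echo "grow did not take effect: $clusters clusters" >&2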
00:08:44.238 Received shutdown signal, test time was about 10.000000 seconds 00:08:44.238 00:08:44.238 Latency(us) 00:08:44.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.239 =================================================================================================================== 00:08:44.239 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.239 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65867 00:08:44.497 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.757 18:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.015 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:45.015 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 714182ed-9ed2-45d6-a586-07de10b43015 00:08:45.274 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:45.274 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:45.274 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:45.533 [2024-07-15 18:59:12.644125] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:45.533 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 714182ed-9ed2-45d6-a586-07de10b43015 00:08:45.533 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:45.533 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 714182ed-9ed2-45d6-a586-07de10b43015 00:08:45.533 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.533 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:45.533 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.533 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:45.533 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.533 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:45.533 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.533 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:45.533 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 714182ed-9ed2-45d6-a586-07de10b43015 00:08:45.792 request: 00:08:45.792 { 00:08:45.792 "uuid": "714182ed-9ed2-45d6-a586-07de10b43015", 00:08:45.792 "method": "bdev_lvol_get_lvstores", 00:08:45.792 "req_id": 1 00:08:45.792 } 00:08:45.792 Got JSON-RPC error response 00:08:45.792 response: 00:08:45.792 { 00:08:45.792 "code": -19, 00:08:45.792 "message": "No such device" 00:08:45.792 } 00:08:45.792 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:45.792 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:45.792 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:45.792 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:45.792 18:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.050 aio_bdev 00:08:46.050 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6b42daf4-24f7-4dc7-9e69-830a3fc53121 00:08:46.050 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=6b42daf4-24f7-4dc7-9e69-830a3fc53121 00:08:46.050 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:46.050 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:46.050 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:46.050 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:46.050 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:46.309 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b42daf4-24f7-4dc7-9e69-830a3fc53121 -t 2000 00:08:46.609 [ 00:08:46.609 { 00:08:46.609 "name": "6b42daf4-24f7-4dc7-9e69-830a3fc53121", 00:08:46.609 "aliases": [ 00:08:46.609 "lvs/lvol" 00:08:46.609 ], 00:08:46.609 "product_name": "Logical Volume", 00:08:46.609 "block_size": 4096, 00:08:46.609 "num_blocks": 38912, 00:08:46.609 "uuid": "6b42daf4-24f7-4dc7-9e69-830a3fc53121", 00:08:46.609 "assigned_rate_limits": { 00:08:46.609 "rw_ios_per_sec": 0, 00:08:46.609 "rw_mbytes_per_sec": 0, 00:08:46.609 "r_mbytes_per_sec": 0, 00:08:46.609 "w_mbytes_per_sec": 0 00:08:46.609 }, 00:08:46.609 "claimed": false, 00:08:46.609 "zoned": false, 00:08:46.609 "supported_io_types": { 00:08:46.609 "read": true, 00:08:46.609 "write": true, 00:08:46.609 "unmap": true, 00:08:46.609 "flush": false, 00:08:46.609 "reset": true, 00:08:46.609 "nvme_admin": false, 00:08:46.609 "nvme_io": false, 00:08:46.609 "nvme_io_md": false, 00:08:46.609 "write_zeroes": true, 00:08:46.609 "zcopy": false, 00:08:46.609 "get_zone_info": false, 00:08:46.609 "zone_management": false, 00:08:46.609 "zone_append": false, 00:08:46.609 "compare": false, 00:08:46.609 "compare_and_write": false, 00:08:46.609 "abort": false, 00:08:46.609 "seek_hole": true, 00:08:46.609 "seek_data": true, 00:08:46.609 "copy": false, 00:08:46.609 "nvme_iov_md": false 00:08:46.609 }, 00:08:46.609 "driver_specific": { 00:08:46.609 "lvol": { 
00:08:46.609 "lvol_store_uuid": "714182ed-9ed2-45d6-a586-07de10b43015", 00:08:46.609 "base_bdev": "aio_bdev", 00:08:46.609 "thin_provision": false, 00:08:46.609 "num_allocated_clusters": 38, 00:08:46.609 "snapshot": false, 00:08:46.609 "clone": false, 00:08:46.609 "esnap_clone": false 00:08:46.609 } 00:08:46.609 } 00:08:46.609 } 00:08:46.609 ] 00:08:46.609 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:46.609 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:46.609 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 714182ed-9ed2-45d6-a586-07de10b43015 00:08:46.868 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:46.868 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 714182ed-9ed2-45d6-a586-07de10b43015 00:08:46.868 18:59:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:47.126 18:59:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:47.126 18:59:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6b42daf4-24f7-4dc7-9e69-830a3fc53121 00:08:47.383 18:59:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 714182ed-9ed2-45d6-a586-07de10b43015 00:08:47.641 18:59:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:47.898 18:59:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:48.158 00:08:48.158 real 0m18.136s 00:08:48.158 user 0m16.989s 00:08:48.158 sys 0m2.595s 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.158 ************************************ 00:08:48.158 END TEST lvs_grow_clean 00:08:48.158 ************************************ 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.158 ************************************ 00:08:48.158 START TEST lvs_grow_dirty 00:08:48.158 ************************************ 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:48.158 18:59:15 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:48.158 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.417 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:48.417 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:48.675 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e30592f1-878c-44a9-9e72-91db81dcb594 00:08:48.675 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:48.675 18:59:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e30592f1-878c-44a9-9e72-91db81dcb594 00:08:49.242 18:59:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:49.242 18:59:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:49.242 18:59:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e30592f1-878c-44a9-9e72-91db81dcb594 lvol 150 00:08:49.242 18:59:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e3d62a70-cd8a-4260-bd39-e563f7b5dea5 00:08:49.242 18:59:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.242 18:59:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:49.501 [2024-07-15 18:59:16.771290] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:49.501 [2024-07-15 18:59:16.771421] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:49.501 true 00:08:49.758 18:59:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e30592f1-878c-44a9-9e72-91db81dcb594 00:08:49.758 18:59:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:49.758 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:08:49.758 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:50.016 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e3d62a70-cd8a-4260-bd39-e563f7b5dea5 00:08:50.275 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:50.533 [2024-07-15 18:59:17.731758] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.533 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.792 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66142 00:08:50.792 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:50.792 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:50.792 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66142 /var/tmp/bdevperf.sock 00:08:50.792 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66142 ']' 00:08:50.792 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:50.792 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:50.792 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:50.792 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.792 18:59:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:50.792 [2024-07-15 18:59:18.011622] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:08:50.792 [2024-07-15 18:59:18.011709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66142 ] 00:08:51.050 [2024-07-15 18:59:18.145398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.050 [2024-07-15 18:59:18.262393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.050 [2024-07-15 18:59:18.327254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:51.617 18:59:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.617 18:59:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:51.617 18:59:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:52.184 Nvme0n1 00:08:52.184 18:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:52.184 [ 00:08:52.184 { 00:08:52.184 "name": "Nvme0n1", 00:08:52.184 "aliases": [ 00:08:52.184 "e3d62a70-cd8a-4260-bd39-e563f7b5dea5" 00:08:52.184 ], 00:08:52.184 "product_name": "NVMe disk", 00:08:52.184 "block_size": 4096, 00:08:52.184 "num_blocks": 38912, 00:08:52.184 "uuid": "e3d62a70-cd8a-4260-bd39-e563f7b5dea5", 00:08:52.184 "assigned_rate_limits": { 00:08:52.184 "rw_ios_per_sec": 0, 00:08:52.184 "rw_mbytes_per_sec": 0, 00:08:52.184 "r_mbytes_per_sec": 0, 00:08:52.184 "w_mbytes_per_sec": 0 00:08:52.184 }, 00:08:52.184 "claimed": false, 00:08:52.184 "zoned": false, 00:08:52.184 "supported_io_types": { 00:08:52.184 "read": true, 00:08:52.184 "write": true, 00:08:52.184 "unmap": true, 00:08:52.184 "flush": true, 00:08:52.184 "reset": true, 00:08:52.184 "nvme_admin": true, 00:08:52.184 "nvme_io": true, 00:08:52.184 "nvme_io_md": false, 00:08:52.184 "write_zeroes": true, 00:08:52.184 "zcopy": false, 00:08:52.184 "get_zone_info": false, 00:08:52.184 "zone_management": false, 00:08:52.184 "zone_append": false, 00:08:52.184 "compare": true, 00:08:52.184 "compare_and_write": true, 00:08:52.184 "abort": true, 00:08:52.184 "seek_hole": false, 00:08:52.184 "seek_data": false, 00:08:52.184 "copy": true, 00:08:52.184 "nvme_iov_md": false 00:08:52.184 }, 00:08:52.184 "memory_domains": [ 00:08:52.184 { 00:08:52.184 "dma_device_id": "system", 00:08:52.184 "dma_device_type": 1 00:08:52.184 } 00:08:52.184 ], 00:08:52.184 "driver_specific": { 00:08:52.184 "nvme": [ 00:08:52.184 { 00:08:52.184 "trid": { 00:08:52.184 "trtype": "TCP", 00:08:52.184 "adrfam": "IPv4", 00:08:52.184 "traddr": "10.0.0.2", 00:08:52.184 "trsvcid": "4420", 00:08:52.184 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:52.184 }, 00:08:52.184 "ctrlr_data": { 00:08:52.184 "cntlid": 1, 00:08:52.184 "vendor_id": "0x8086", 00:08:52.184 "model_number": "SPDK bdev Controller", 00:08:52.184 "serial_number": "SPDK0", 00:08:52.184 "firmware_revision": "24.09", 00:08:52.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:52.184 "oacs": { 00:08:52.184 "security": 0, 00:08:52.184 "format": 0, 00:08:52.184 "firmware": 0, 00:08:52.184 "ns_manage": 0 00:08:52.184 }, 00:08:52.184 "multi_ctrlr": true, 00:08:52.184 
"ana_reporting": false 00:08:52.184 }, 00:08:52.184 "vs": { 00:08:52.184 "nvme_version": "1.3" 00:08:52.184 }, 00:08:52.184 "ns_data": { 00:08:52.184 "id": 1, 00:08:52.184 "can_share": true 00:08:52.184 } 00:08:52.184 } 00:08:52.184 ], 00:08:52.184 "mp_policy": "active_passive" 00:08:52.184 } 00:08:52.184 } 00:08:52.184 ] 00:08:52.184 18:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66171 00:08:52.184 18:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:52.184 18:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:52.442 Running I/O for 10 seconds... 00:08:53.376 Latency(us) 00:08:53.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.376 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:53.376 =================================================================================================================== 00:08:53.376 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:53.376 00:08:54.310 18:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e30592f1-878c-44a9-9e72-91db81dcb594 00:08:54.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.310 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:08:54.310 =================================================================================================================== 00:08:54.310 Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:08:54.310 00:08:54.574 true 00:08:54.574 18:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e30592f1-878c-44a9-9e72-91db81dcb594 00:08:54.574 18:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:54.846 18:59:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:54.846 18:59:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:54.846 18:59:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66171 00:08:55.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.414 Nvme0n1 : 3.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:55.414 =================================================================================================================== 00:08:55.414 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:55.414 00:08:56.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.350 Nvme0n1 : 4.00 7080.25 27.66 0.00 0.00 0.00 0.00 0.00 00:08:56.350 =================================================================================================================== 00:08:56.350 Total : 7080.25 27.66 0.00 0.00 0.00 0.00 0.00 00:08:56.350 00:08:57.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.308 Nvme0n1 : 5.00 7086.60 27.68 0.00 0.00 0.00 0.00 0.00 00:08:57.308 =================================================================================================================== 00:08:57.308 Total : 7086.60 27.68 0.00 0.00 0.00 
0.00 0.00 00:08:57.308 00:08:58.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.686 Nvme0n1 : 6.00 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:08:58.686 =================================================================================================================== 00:08:58.686 Total : 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:08:58.686 00:08:59.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.255 Nvme0n1 : 7.00 7004.29 27.36 0.00 0.00 0.00 0.00 0.00 00:08:59.255 =================================================================================================================== 00:08:59.255 Total : 7004.29 27.36 0.00 0.00 0.00 0.00 0.00 00:08:59.255 00:09:00.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.633 Nvme0n1 : 8.00 7017.75 27.41 0.00 0.00 0.00 0.00 0.00 00:09:00.633 =================================================================================================================== 00:09:00.633 Total : 7017.75 27.41 0.00 0.00 0.00 0.00 0.00 00:09:00.633 00:09:01.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.569 Nvme0n1 : 9.00 7028.22 27.45 0.00 0.00 0.00 0.00 0.00 00:09:01.569 =================================================================================================================== 00:09:01.569 Total : 7028.22 27.45 0.00 0.00 0.00 0.00 0.00 00:09:01.569 00:09:02.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.505 Nvme0n1 : 10.00 6985.80 27.29 0.00 0.00 0.00 0.00 0.00 00:09:02.505 =================================================================================================================== 00:09:02.505 Total : 6985.80 27.29 0.00 0.00 0.00 0.00 0.00 00:09:02.505 00:09:02.505 00:09:02.505 Latency(us) 00:09:02.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.505 Nvme0n1 : 10.02 6987.20 27.29 0.00 0.00 18313.26 3723.64 171585.16 00:09:02.505 =================================================================================================================== 00:09:02.505 Total : 6987.20 27.29 0.00 0.00 18313.26 3723.64 171585.16 00:09:02.505 0 00:09:02.505 18:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66142 00:09:02.505 18:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66142 ']' 00:09:02.505 18:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66142 00:09:02.505 18:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:02.505 18:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:02.505 18:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66142 00:09:02.505 killing process with pid 66142 00:09:02.505 Received shutdown signal, test time was about 10.000000 seconds 00:09:02.505 00:09:02.505 Latency(us) 00:09:02.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.505 =================================================================================================================== 00:09:02.505 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:02.505 18:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:09:02.505 18:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:02.505 18:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66142' 00:09:02.505 18:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66142 00:09:02.505 18:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66142 00:09:02.763 18:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:03.021 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:03.279 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:03.279 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e30592f1-878c-44a9-9e72-91db81dcb594 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65790 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65790 00:09:03.538 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65790 Killed "${NVMF_APP[@]}" "$@" 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66304 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:03.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66304 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66304 ']' 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
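The dirty variant hinges on the kill -9 traced above: the first nvmf target (pid 65790) is terminated without a graceful shutdown, so the lvolstore's blobstore never records a clean unload, and the fresh nvmf_tgt started next (pid 66304) will have to recover it. A minimal sketch of that same sequence outside the harness — the pid variable and the use of rpc_get_methods as a readiness probe are illustrative, not lifted from this run:

  # leave the lvstore dirty: SIGKILL, no graceful blobstore unload
  kill -9 "$nvmfpid"
  # start a fresh target inside the test namespace, same flags as the log shows
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # wait until the RPC socket answers (roughly what the harness's waitforlisten does)
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done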
00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.538 18:59:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.538 [2024-07-15 18:59:30.656145] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:03.538 [2024-07-15 18:59:30.656461] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.538 [2024-07-15 18:59:30.792925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.797 [2024-07-15 18:59:30.879629] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.797 [2024-07-15 18:59:30.879951] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.797 [2024-07-15 18:59:30.879970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.797 [2024-07-15 18:59:30.879978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.797 [2024-07-15 18:59:30.879985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.797 [2024-07-15 18:59:30.880016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.797 [2024-07-15 18:59:30.931890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:04.366 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.366 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:04.366 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:04.366 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.366 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:04.366 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.366 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:04.625 [2024-07-15 18:59:31.874001] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:04.625 [2024-07-15 18:59:31.874379] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:04.625 [2024-07-15 18:59:31.874682] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:04.883 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:04.883 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e3d62a70-cd8a-4260-bd39-e563f7b5dea5 00:09:04.883 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=e3d62a70-cd8a-4260-bd39-e563f7b5dea5 00:09:04.883 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:04.883 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
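Re-creating the AIO bdev is what produces the recovery notices just above: when aio_bdev is registered again, the lvstore examine path finds a blobstore that was not shut down cleanly, replays its metadata ("Performing recovery on blobstore", "Recover: blob 0x0/0x1"), and only then does the lvol bdev reappear. The equivalent manual steps, using the same file and RPCs this run uses (the 2000 ms timeout mirrors the value the harness passes):

  ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096
  ./scripts/rpc.py bdev_wait_for_examine
  # the lvol should be listed again once recovery finishes
  ./scripts/rpc.py bdev_get_bdevs -b e3d62a70-cd8a-4260-bd39-e563f7b5dea5 -t 2000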
00:09:04.883 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:04.883 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:04.883 18:59:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:05.142 18:59:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e3d62a70-cd8a-4260-bd39-e563f7b5dea5 -t 2000 00:09:05.142 [ 00:09:05.142 { 00:09:05.142 "name": "e3d62a70-cd8a-4260-bd39-e563f7b5dea5", 00:09:05.142 "aliases": [ 00:09:05.142 "lvs/lvol" 00:09:05.142 ], 00:09:05.142 "product_name": "Logical Volume", 00:09:05.142 "block_size": 4096, 00:09:05.142 "num_blocks": 38912, 00:09:05.142 "uuid": "e3d62a70-cd8a-4260-bd39-e563f7b5dea5", 00:09:05.142 "assigned_rate_limits": { 00:09:05.142 "rw_ios_per_sec": 0, 00:09:05.142 "rw_mbytes_per_sec": 0, 00:09:05.142 "r_mbytes_per_sec": 0, 00:09:05.142 "w_mbytes_per_sec": 0 00:09:05.142 }, 00:09:05.142 "claimed": false, 00:09:05.142 "zoned": false, 00:09:05.142 "supported_io_types": { 00:09:05.142 "read": true, 00:09:05.142 "write": true, 00:09:05.142 "unmap": true, 00:09:05.142 "flush": false, 00:09:05.142 "reset": true, 00:09:05.142 "nvme_admin": false, 00:09:05.142 "nvme_io": false, 00:09:05.142 "nvme_io_md": false, 00:09:05.142 "write_zeroes": true, 00:09:05.142 "zcopy": false, 00:09:05.142 "get_zone_info": false, 00:09:05.142 "zone_management": false, 00:09:05.142 "zone_append": false, 00:09:05.142 "compare": false, 00:09:05.142 "compare_and_write": false, 00:09:05.142 "abort": false, 00:09:05.142 "seek_hole": true, 00:09:05.142 "seek_data": true, 00:09:05.142 "copy": false, 00:09:05.142 "nvme_iov_md": false 00:09:05.142 }, 00:09:05.142 "driver_specific": { 00:09:05.142 "lvol": { 00:09:05.142 "lvol_store_uuid": "e30592f1-878c-44a9-9e72-91db81dcb594", 00:09:05.142 "base_bdev": "aio_bdev", 00:09:05.142 "thin_provision": false, 00:09:05.142 "num_allocated_clusters": 38, 00:09:05.142 "snapshot": false, 00:09:05.142 "clone": false, 00:09:05.142 "esnap_clone": false 00:09:05.142 } 00:09:05.142 } 00:09:05.142 } 00:09:05.142 ] 00:09:05.142 18:59:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:05.142 18:59:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e30592f1-878c-44a9-9e72-91db81dcb594 00:09:05.142 18:59:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:05.401 18:59:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:05.401 18:59:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e30592f1-878c-44a9-9e72-91db81dcb594 00:09:05.401 18:59:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:05.660 18:59:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:05.660 18:59:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.919 [2024-07-15 18:59:33.147630] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:09:05.919 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e30592f1-878c-44a9-9e72-91db81dcb594 00:09:05.919 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:05.919 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e30592f1-878c-44a9-9e72-91db81dcb594 00:09:05.919 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.919 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.919 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.919 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.919 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.919 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.919 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.919 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:05.919 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e30592f1-878c-44a9-9e72-91db81dcb594 00:09:06.177 request: 00:09:06.177 { 00:09:06.177 "uuid": "e30592f1-878c-44a9-9e72-91db81dcb594", 00:09:06.177 "method": "bdev_lvol_get_lvstores", 00:09:06.177 "req_id": 1 00:09:06.177 } 00:09:06.177 Got JSON-RPC error response 00:09:06.177 response: 00:09:06.177 { 00:09:06.177 "code": -19, 00:09:06.177 "message": "No such device" 00:09:06.177 } 00:09:06.177 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:06.177 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:06.177 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:06.177 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:06.177 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:06.435 aio_bdev 00:09:06.435 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e3d62a70-cd8a-4260-bd39-e563f7b5dea5 00:09:06.435 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=e3d62a70-cd8a-4260-bd39-e563f7b5dea5 00:09:06.435 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:06.435 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:06.435 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:06.435 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:06.435 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:06.694 18:59:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e3d62a70-cd8a-4260-bd39-e563f7b5dea5 -t 2000 00:09:06.953 [ 00:09:06.953 { 00:09:06.953 "name": "e3d62a70-cd8a-4260-bd39-e563f7b5dea5", 00:09:06.953 "aliases": [ 00:09:06.953 "lvs/lvol" 00:09:06.953 ], 00:09:06.953 "product_name": "Logical Volume", 00:09:06.953 "block_size": 4096, 00:09:06.953 "num_blocks": 38912, 00:09:06.953 "uuid": "e3d62a70-cd8a-4260-bd39-e563f7b5dea5", 00:09:06.953 "assigned_rate_limits": { 00:09:06.953 "rw_ios_per_sec": 0, 00:09:06.953 "rw_mbytes_per_sec": 0, 00:09:06.953 "r_mbytes_per_sec": 0, 00:09:06.953 "w_mbytes_per_sec": 0 00:09:06.953 }, 00:09:06.953 "claimed": false, 00:09:06.953 "zoned": false, 00:09:06.953 "supported_io_types": { 00:09:06.953 "read": true, 00:09:06.953 "write": true, 00:09:06.953 "unmap": true, 00:09:06.953 "flush": false, 00:09:06.953 "reset": true, 00:09:06.953 "nvme_admin": false, 00:09:06.953 "nvme_io": false, 00:09:06.953 "nvme_io_md": false, 00:09:06.953 "write_zeroes": true, 00:09:06.953 "zcopy": false, 00:09:06.953 "get_zone_info": false, 00:09:06.953 "zone_management": false, 00:09:06.953 "zone_append": false, 00:09:06.953 "compare": false, 00:09:06.953 "compare_and_write": false, 00:09:06.953 "abort": false, 00:09:06.953 "seek_hole": true, 00:09:06.953 "seek_data": true, 00:09:06.953 "copy": false, 00:09:06.953 "nvme_iov_md": false 00:09:06.953 }, 00:09:06.953 "driver_specific": { 00:09:06.953 "lvol": { 00:09:06.953 "lvol_store_uuid": "e30592f1-878c-44a9-9e72-91db81dcb594", 00:09:06.953 "base_bdev": "aio_bdev", 00:09:06.953 "thin_provision": false, 00:09:06.953 "num_allocated_clusters": 38, 00:09:06.953 "snapshot": false, 00:09:06.953 "clone": false, 00:09:06.953 "esnap_clone": false 00:09:06.953 } 00:09:06.953 } 00:09:06.953 } 00:09:06.953 ] 00:09:06.953 18:59:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:06.953 18:59:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e30592f1-878c-44a9-9e72-91db81dcb594 00:09:06.953 18:59:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:07.212 18:59:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:07.212 18:59:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:07.212 18:59:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e30592f1-878c-44a9-9e72-91db81dcb594 00:09:07.483 18:59:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:07.483 18:59:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e3d62a70-cd8a-4260-bd39-e563f7b5dea5 00:09:07.768 18:59:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u e30592f1-878c-44a9-9e72-91db81dcb594 00:09:08.028 18:59:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:08.287 18:59:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:08.546 ************************************ 00:09:08.546 END TEST lvs_grow_dirty 00:09:08.546 ************************************ 00:09:08.546 00:09:08.546 real 0m20.359s 00:09:08.546 user 0m42.121s 00:09:08.546 sys 0m8.716s 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:08.546 nvmf_trace.0 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:08.546 18:59:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:08.805 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:08.805 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:08.805 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:08.805 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:08.805 rmmod nvme_tcp 00:09:08.805 rmmod nvme_fabrics 00:09:09.065 rmmod nvme_keyring 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66304 ']' 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66304 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66304 ']' 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66304 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66304 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66304' 00:09:09.065 killing process with pid 66304 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66304 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66304 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.065 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.325 18:59:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:09.325 00:09:09.325 real 0m41.015s 00:09:09.325 user 1m5.417s 00:09:09.325 sys 0m12.083s 00:09:09.325 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.325 18:59:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.325 ************************************ 00:09:09.325 END TEST nvmf_lvs_grow 00:09:09.325 ************************************ 00:09:09.325 18:59:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:09.325 18:59:36 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:09.325 18:59:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.325 18:59:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.325 18:59:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.325 ************************************ 00:09:09.325 START TEST nvmf_bdev_io_wait 00:09:09.325 ************************************ 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:09.325 * Looking for test storage... 
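The lvs_grow suite ends here (about 41 s of wall time) and the harness moves straight into the next run_test call. Every suite in this log is wrapped the same way — an argument-count check, START/END banners, timing — roughly like this paraphrase of the helper (not the exact autotest_common.sh source):

  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      "$@"                      # e.g. ./test/nvmf/target/bdev_io_wait.sh --transport=tcp
      echo "END TEST $name"
  }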
00:09:09.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.325 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:09.326 Cannot find device "nvmf_tgt_br" 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.326 Cannot find device "nvmf_tgt_br2" 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:09.326 Cannot find device "nvmf_tgt_br" 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:09:09.326 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:09.585 Cannot find device "nvmf_tgt_br2" 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
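The "Cannot find device" lines above are just the idempotent cleanup pass of the veth init; the commands that follow rebuild the topology the later pings verify: a veth pair per side, the target ends moved into the nvmf_tgt_ns_spdk namespace, and both bridge-side peers enslaved to nvmf_br. Condensed from the trace below (addresses as used throughout this run; link-up commands and the second target veth at 10.0.0.3, which is created the same way, are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1/24
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, 10.0.0.2/24 inside the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT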
00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.585 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:09.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:09:09.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:09:09.844 00:09:09.844 --- 10.0.0.2 ping statistics --- 00:09:09.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.844 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:09.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:09.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:09:09.844 00:09:09.844 --- 10.0.0.3 ping statistics --- 00:09:09.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.844 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:09.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:09.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:09.844 00:09:09.844 --- 10.0.0.1 ping statistics --- 00:09:09.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.844 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66614 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66614 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66614 ']' 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
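This target is started with --wait-for-rpc, so subsystem initialization is deferred until the test has set bdev options over RPC; the deliberately tiny bdev_io pool (-p 5, per-thread cache -c 1) is there so the bdev layer has to queue I/O and exercise its io_wait path during the bdevperf runs that follow. The RPC sequence the next lines trace, written out as plain commands against the default /var/tmp/spdk.sock:

  ./scripts/rpc.py bdev_set_options -p 5 -c 1        # bdev_io pool of 5, cache of 1 per thread
  ./scripts/rpc.py framework_start_init              # now let the framework/subsystems initialize
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420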
00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.844 18:59:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.844 [2024-07-15 18:59:37.003129] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:09.844 [2024-07-15 18:59:37.003215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.103 [2024-07-15 18:59:37.144918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.103 [2024-07-15 18:59:37.258897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.103 [2024-07-15 18:59:37.258969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.103 [2024-07-15 18:59:37.258983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.103 [2024-07-15 18:59:37.258994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.104 [2024-07-15 18:59:37.259003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.104 [2024-07-15 18:59:37.259169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.104 [2024-07-15 18:59:37.261572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.104 [2024-07-15 18:59:37.261736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.104 [2024-07-15 18:59:37.261748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.041 [2024-07-15 18:59:38.150404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:11.041 
18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.041 [2024-07-15 18:59:38.166681] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.041 Malloc0 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.041 [2024-07-15 18:59:38.230272] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66649 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66651 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:11.041 { 00:09:11.041 "params": { 00:09:11.041 "name": "Nvme$subsystem", 00:09:11.041 "trtype": "$TEST_TRANSPORT", 00:09:11.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.041 "adrfam": "ipv4", 00:09:11.041 "trsvcid": "$NVMF_PORT", 00:09:11.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.041 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:11.041 "hdgst": ${hdgst:-false}, 00:09:11.041 "ddgst": ${ddgst:-false} 00:09:11.041 }, 00:09:11.041 "method": "bdev_nvme_attach_controller" 00:09:11.041 } 00:09:11.041 EOF 00:09:11.041 )") 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66653 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:11.041 { 00:09:11.041 "params": { 00:09:11.041 "name": "Nvme$subsystem", 00:09:11.041 "trtype": "$TEST_TRANSPORT", 00:09:11.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.041 "adrfam": "ipv4", 00:09:11.041 "trsvcid": "$NVMF_PORT", 00:09:11.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.041 "hdgst": ${hdgst:-false}, 00:09:11.041 "ddgst": ${ddgst:-false} 00:09:11.041 }, 00:09:11.041 "method": "bdev_nvme_attach_controller" 00:09:11.041 } 00:09:11.041 EOF 00:09:11.041 )") 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66656 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:11.041 { 00:09:11.041 "params": { 00:09:11.041 "name": "Nvme$subsystem", 00:09:11.041 "trtype": "$TEST_TRANSPORT", 00:09:11.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.041 "adrfam": "ipv4", 00:09:11.041 "trsvcid": "$NVMF_PORT", 00:09:11.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.041 "hdgst": ${hdgst:-false}, 00:09:11.041 "ddgst": ${ddgst:-false} 00:09:11.041 }, 00:09:11.041 "method": "bdev_nvme_attach_controller" 00:09:11.041 } 00:09:11.041 EOF 00:09:11.041 )") 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:11.041 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:11.041 { 00:09:11.041 "params": { 00:09:11.041 "name": "Nvme$subsystem", 00:09:11.041 "trtype": "$TEST_TRANSPORT", 00:09:11.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.041 "adrfam": "ipv4", 00:09:11.041 "trsvcid": "$NVMF_PORT", 00:09:11.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.042 "hdgst": ${hdgst:-false}, 00:09:11.042 "ddgst": ${ddgst:-false} 00:09:11.042 }, 00:09:11.042 "method": "bdev_nvme_attach_controller" 00:09:11.042 } 00:09:11.042 EOF 00:09:11.042 )") 00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:11.042 "params": { 00:09:11.042 "name": "Nvme1", 00:09:11.042 "trtype": "tcp", 00:09:11.042 "traddr": "10.0.0.2", 00:09:11.042 "adrfam": "ipv4", 00:09:11.042 "trsvcid": "4420", 00:09:11.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.042 "hdgst": false, 00:09:11.042 "ddgst": false 00:09:11.042 }, 00:09:11.042 "method": "bdev_nvme_attach_controller" 00:09:11.042 }' 00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:11.042 "params": { 00:09:11.042 "name": "Nvme1", 00:09:11.042 "trtype": "tcp", 00:09:11.042 "traddr": "10.0.0.2", 00:09:11.042 "adrfam": "ipv4", 00:09:11.042 "trsvcid": "4420", 00:09:11.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.042 "hdgst": false, 00:09:11.042 "ddgst": false 00:09:11.042 }, 00:09:11.042 "method": "bdev_nvme_attach_controller" 00:09:11.042 }' 00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:11.042 "params": { 00:09:11.042 "name": "Nvme1", 00:09:11.042 "trtype": "tcp", 00:09:11.042 "traddr": "10.0.0.2", 00:09:11.042 "adrfam": "ipv4", 00:09:11.042 "trsvcid": "4420", 00:09:11.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.042 "hdgst": false, 00:09:11.042 "ddgst": false 00:09:11.042 }, 00:09:11.042 "method": "bdev_nvme_attach_controller" 00:09:11.042 }' 00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
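Behind the rpc_cmd wrapper (which the autotest harness routes to scripts/rpc.py), provisioning the target for this test is a five-step sequence: create the TCP transport, back it with a malloc bdev, expose the bdev through a subsystem and namespace, and listen on the namespace-side address. As plain rpc.py calls, with the NQN, serial number and 10.0.0.2:4420 listener taken from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options as used above
  $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420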
00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:11.042 "params": { 00:09:11.042 "name": "Nvme1", 00:09:11.042 "trtype": "tcp", 00:09:11.042 "traddr": "10.0.0.2", 00:09:11.042 "adrfam": "ipv4", 00:09:11.042 "trsvcid": "4420", 00:09:11.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.042 "hdgst": false, 00:09:11.042 "ddgst": false 00:09:11.042 }, 00:09:11.042 "method": "bdev_nvme_attach_controller" 00:09:11.042 }' 00:09:11.042 [2024-07-15 18:59:38.297554] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:11.042 [2024-07-15 18:59:38.297629] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:11.042 18:59:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66649 00:09:11.042 [2024-07-15 18:59:38.313265] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:11.042 [2024-07-15 18:59:38.313328] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:11.042 [2024-07-15 18:59:38.314232] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:11.042 [2024-07-15 18:59:38.314303] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:11.042 [2024-07-15 18:59:38.315607] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:11.042 [2024-07-15 18:59:38.315677] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:11.301 [2024-07-15 18:59:38.513084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.560 [2024-07-15 18:59:38.591335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.560 [2024-07-15 18:59:38.647269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:11.560 [2024-07-15 18:59:38.668972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.560 [2024-07-15 18:59:38.696965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:11.560 [2024-07-15 18:59:38.710490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.560 [2024-07-15 18:59:38.744503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.560 [2024-07-15 18:59:38.751881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.560 [2024-07-15 18:59:38.775538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:11.560 [2024-07-15 18:59:38.822989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.560 Running I/O for 1 seconds... 
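The test then drives that single cnode1 namespace from four independent bdevperf processes at once, one per I/O type, each pinned to its own core and fed the generated host configuration through --json /dev/fd/63 (which appears to be a process substitution carrying the bdev_nvme_attach_controller parameters printed above; the full JSON wrapper produced by gen_nvmf_target_json is not shown in this log). Stripped of the full binary path, the four invocations differ only in core mask, instance id and workload:

  # one bdevperf instance per workload, as launched above (bdevperf = build/examples/bdevperf)
  bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
  bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256
  bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
  bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256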
00:09:11.560 Running I/O for 1 seconds... 00:09:11.820 [2024-07-15 18:59:38.859843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:11.820 [2024-07-15 18:59:38.909291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.820 Running I/O for 1 seconds... 00:09:11.820 Running I/O for 1 seconds... 00:09:12.755 00:09:12.755 Latency(us) 00:09:12.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.755 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:12.755 Nvme1n1 : 1.02 4954.45 19.35 0.00 0.00 25580.97 12332.68 36700.16 00:09:12.755 =================================================================================================================== 00:09:12.755 Total : 4954.45 19.35 0.00 0.00 25580.97 12332.68 36700.16 00:09:12.755 00:09:12.755 Latency(us) 00:09:12.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.755 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:12.755 Nvme1n1 : 1.03 4872.71 19.03 0.00 0.00 25798.07 10068.71 45756.04 00:09:12.755 =================================================================================================================== 00:09:12.755 Total : 4872.71 19.03 0.00 0.00 25798.07 10068.71 45756.04 00:09:12.755 00:09:12.755 Latency(us) 00:09:12.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.755 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:12.755 Nvme1n1 : 1.01 5039.22 19.68 0.00 0.00 25312.47 6225.92 54573.61 00:09:12.755 =================================================================================================================== 00:09:12.755 Total : 5039.22 19.68 0.00 0.00 25312.47 6225.92 54573.61 00:09:12.755 00:09:12.755 Latency(us) 00:09:12.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.755 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:12.755 Nvme1n1 : 1.00 176368.39 688.94 0.00 0.00 723.08 327.68 1072.41 00:09:12.755 =================================================================================================================== 00:09:12.755 Total : 176368.39 688.94 0.00 0.00 723.08 327.68 1072.41 00:09:13.013 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66651 00:09:13.014 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66653 00:09:13.272 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66656 00:09:13.272 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.272 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.272 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.272 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.272 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:13.272 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:13.272 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.272 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:13.272 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:13.272 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@120 -- # set +e 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:13.273 rmmod nvme_tcp 00:09:13.273 rmmod nvme_fabrics 00:09:13.273 rmmod nvme_keyring 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66614 ']' 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66614 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66614 ']' 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66614 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66614 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66614' 00:09:13.273 killing process with pid 66614 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66614 00:09:13.273 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66614 00:09:13.531 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.531 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.531 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.531 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.531 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.531 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.531 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.531 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.531 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:13.531 00:09:13.531 real 0m4.271s 00:09:13.531 user 0m18.948s 00:09:13.531 sys 0m2.174s 00:09:13.531 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.532 18:59:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.532 ************************************ 00:09:13.532 END TEST nvmf_bdev_io_wait 00:09:13.532 ************************************ 00:09:13.532 18:59:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:13.532 18:59:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:13.532 18:59:40 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:13.532 18:59:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.532 18:59:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.532 ************************************ 00:09:13.532 START TEST nvmf_queue_depth 00:09:13.532 ************************************ 00:09:13.532 18:59:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:13.532 * Looking for test storage... 00:09:13.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:13.792 Cannot find device "nvmf_tgt_br" 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:13.792 Cannot find device "nvmf_tgt_br2" 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:13.792 18:59:40 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:13.792 Cannot find device "nvmf_tgt_br" 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:13.792 Cannot find device "nvmf_tgt_br2" 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:13.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:13.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:13.792 18:59:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:13.792 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:14.052 
18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:14.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:09:14.052 00:09:14.052 --- 10.0.0.2 ping statistics --- 00:09:14.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.052 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:14.052 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:14.052 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:09:14.052 00:09:14.052 --- 10.0.0.3 ping statistics --- 00:09:14.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.052 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:14.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:09:14.052 00:09:14.052 --- 10.0.0.1 ping statistics --- 00:09:14.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.052 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66887 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66887 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66887 ']' 00:09:14.052 18:59:41 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.052 18:59:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.052 [2024-07-15 18:59:41.241901] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:14.052 [2024-07-15 18:59:41.241999] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.313 [2024-07-15 18:59:41.385096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.313 [2024-07-15 18:59:41.489942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.313 [2024-07-15 18:59:41.489999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.313 [2024-07-15 18:59:41.490014] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.313 [2024-07-15 18:59:41.490024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.313 [2024-07-15 18:59:41.490033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
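The queue_depth target is started with tracing enabled (-e 0xFFFF), which is why it prints the spdk_trace hints above. If a run needs to be examined afterwards, the two options the notice points at are taking a live snapshot or keeping the shared-memory trace file (tool name, group and instance id are taken verbatim from the notice; the copy destination is arbitrary):

  # live snapshot of the nvmf target's trace events for instance 0
  spdk_trace -s nvmf -i 0
  # or keep the raw trace buffer for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0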
00:09:14.313 [2024-07-15 18:59:41.490063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.313 [2024-07-15 18:59:41.548596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.259 [2024-07-15 18:59:42.270886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.259 Malloc0 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.259 [2024-07-15 18:59:42.334528] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66921 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66921 /var/tmp/bdevperf.sock 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66921 ']' 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:15.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.259 18:59:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.259 [2024-07-15 18:59:42.394986] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:15.259 [2024-07-15 18:59:42.395079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66921 ] 00:09:15.259 [2024-07-15 18:59:42.536218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.518 [2024-07-15 18:59:42.629127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.518 [2024-07-15 18:59:42.685625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.086 18:59:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.086 18:59:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:16.086 18:59:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:16.086 18:59:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.086 18:59:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.344 NVMe0n1 00:09:16.344 18:59:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.344 18:59:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:16.344 Running I/O for 10 seconds... 
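Unlike the bdev_io_wait case, this test starts bdevperf in waiting mode (-z) on its own RPC socket and only then attaches the remote namespace, so the 1024-deep verify workload above runs against a controller added at runtime. The sequence, exactly as driven above (socket path, NQN and target address are this run's values; rpc_cmd resolves to scripts/rpc.py in this harness):

  # bdevperf idles on /var/tmp/bdevperf.sock until a bdev is attached and tests are triggered
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # attach the NVMe-oF namespace exported by the target; it shows up as bdev NVMe0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the 10-second run
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests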
00:09:26.349 00:09:26.349 Latency(us) 00:09:26.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.349 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:26.349 Verification LBA range: start 0x0 length 0x4000 00:09:26.349 NVMe0n1 : 10.07 9633.46 37.63 0.00 0.00 105789.70 26691.03 81502.95 00:09:26.349 =================================================================================================================== 00:09:26.349 Total : 9633.46 37.63 0.00 0.00 105789.70 26691.03 81502.95 00:09:26.349 0 00:09:26.349 18:59:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66921 00:09:26.349 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66921 ']' 00:09:26.349 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66921 00:09:26.349 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:26.349 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:26.349 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66921 00:09:26.349 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:26.349 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:26.349 killing process with pid 66921 00:09:26.349 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66921' 00:09:26.349 Received shutdown signal, test time was about 10.000000 seconds 00:09:26.349 00:09:26.349 Latency(us) 00:09:26.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.349 =================================================================================================================== 00:09:26.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:26.349 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66921 00:09:26.349 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66921 00:09:26.607 18:59:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:26.607 18:59:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:26.607 18:59:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:26.607 18:59:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:26.607 18:59:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:26.607 18:59:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:26.607 18:59:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.607 18:59:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:26.607 rmmod nvme_tcp 00:09:26.866 rmmod nvme_fabrics 00:09:26.866 rmmod nvme_keyring 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66887 ']' 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66887 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66887 ']' 00:09:26.866 
18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66887 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66887 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:26.866 killing process with pid 66887 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66887' 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66887 00:09:26.866 18:59:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66887 00:09:27.125 18:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.125 18:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.125 18:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.125 18:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.125 18:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.125 18:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.125 18:59:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.125 18:59:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.125 18:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:27.125 ************************************ 00:09:27.125 END TEST nvmf_queue_depth 00:09:27.125 ************************************ 00:09:27.125 00:09:27.125 real 0m13.506s 00:09:27.125 user 0m23.513s 00:09:27.125 sys 0m2.046s 00:09:27.125 18:59:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.125 18:59:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:27.125 18:59:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:27.125 18:59:54 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:27.125 18:59:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.125 18:59:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.125 18:59:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.125 ************************************ 00:09:27.125 START TEST nvmf_target_multipath 00:09:27.125 ************************************ 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:27.125 * Looking for test storage... 
00:09:27.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:09:27.125 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.126 18:59:54 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.126 18:59:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:27.384 Cannot find device "nvmf_tgt_br" 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:27.384 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.384 Cannot find device "nvmf_tgt_br2" 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:27.385 Cannot find device "nvmf_tgt_br" 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:27.385 
18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:27.385 Cannot find device "nvmf_tgt_br2" 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:27.385 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:27.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:09:27.643 00:09:27.643 --- 10.0.0.2 ping statistics --- 00:09:27.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.643 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:27.643 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:27.643 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:09:27.643 00:09:27.643 --- 10.0.0.3 ping statistics --- 00:09:27.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.643 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:27.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:09:27.643 00:09:27.643 --- 10.0.0.1 ping statistics --- 00:09:27.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.643 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67244 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
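Condensed from the nvmf_veth_init trace above, the test network amounts to the following sketch. Interface names and the 10.0.0.0/24 addressing are taken directly from the log; link bring-up, cleanup of a previous run, and error handling are omitted here.

    ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target path
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target path
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                                 # bridge joins both target paths to the initiator
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in

The three ping checks that follow in the trace simply confirm this topology (initiator 10.0.0.1 can reach both target addresses, and the namespace can reach the initiator) before the target is started.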
00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67244 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67244 ']' 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.643 18:59:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:27.643 [2024-07-15 18:59:54.811534] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:27.643 [2024-07-15 18:59:54.811623] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.901 [2024-07-15 18:59:54.954731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.902 [2024-07-15 18:59:55.058459] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.902 [2024-07-15 18:59:55.058535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.902 [2024-07-15 18:59:55.058550] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.902 [2024-07-15 18:59:55.058560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.902 [2024-07-15 18:59:55.058570] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
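A minimal sketch of what nvmfappstart/waitforlisten boil down to for this run; the binary path, core mask and namespace come from the log, while the polling loop is illustrative only (the real helper in autotest_common.sh does considerably more bookkeeping):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # do not issue rpc.py calls until the target answers on its default RPC socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done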
00:09:27.902 [2024-07-15 18:59:55.058708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.902 [2024-07-15 18:59:55.059129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.902 [2024-07-15 18:59:55.059365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.902 [2024-07-15 18:59:55.059414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.902 [2024-07-15 18:59:55.115777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.838 18:59:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.838 18:59:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:09:28.838 18:59:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.838 18:59:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.838 18:59:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:28.838 18:59:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.838 18:59:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:28.838 [2024-07-15 18:59:56.090921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.097 18:59:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:29.356 Malloc0 00:09:29.356 18:59:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:29.614 18:59:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.874 18:59:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.874 [2024-07-15 18:59:57.136118] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.874 18:59:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:30.132 [2024-07-15 18:59:57.368650] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:30.132 18:59:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid=1bdc3113-659b-4df6-a9cf-a9738596adff -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:30.392 18:59:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid=1bdc3113-659b-4df6-a9cf-a9738596adff -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:30.392 18:59:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.392 18:59:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # 
local i=0 00:09:30.392 18:59:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.392 18:59:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:30.392 18:59:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:32.960 18:59:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:32.960 18:59:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:32.960 18:59:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.960 18:59:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:32.960 18:59:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.960 18:59:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:32.960 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:32.960 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:32.960 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:32.960 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:32.960 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:32.960 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67341 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:32.961 18:59:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:32.961 [global] 00:09:32.961 thread=1 00:09:32.961 invalidate=1 00:09:32.961 rw=randrw 00:09:32.961 time_based=1 00:09:32.961 runtime=6 00:09:32.961 ioengine=libaio 00:09:32.961 direct=1 00:09:32.961 bs=4096 00:09:32.961 iodepth=128 00:09:32.961 norandommap=0 00:09:32.961 numjobs=1 00:09:32.961 00:09:32.961 verify_dump=1 00:09:32.961 verify_backlog=512 00:09:32.961 verify_state_save=0 00:09:32.961 do_verify=1 00:09:32.961 verify=crc32c-intel 00:09:32.961 [job0] 00:09:32.961 filename=/dev/nvme0n1 00:09:32.961 Could not set queue depth (nvme0n1) 00:09:32.961 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.961 fio-3.35 00:09:32.961 Starting 1 thread 00:09:33.529 19:00:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:33.788 19:00:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:34.047 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:34.306 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:34.574 19:00:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67341 00:09:38.776 00:09:38.776 job0: (groupid=0, jobs=1): err= 0: pid=67362: Mon Jul 15 19:00:06 2024 00:09:38.776 read: IOPS=9232, BW=36.1MiB/s (37.8MB/s)(217MiB/6007msec) 00:09:38.776 slat (usec): min=6, max=6165, avg=64.37, stdev=253.41 00:09:38.776 clat (usec): min=1983, max=17754, avg=9491.40, stdev=1699.90 00:09:38.776 lat (usec): min=1993, max=17770, avg=9555.77, stdev=1704.85 00:09:38.776 clat percentiles (usec): 00:09:38.776 | 1.00th=[ 5014], 5.00th=[ 7111], 10.00th=[ 7898], 20.00th=[ 8455], 00:09:38.776 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:09:38.776 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10945], 95.00th=[13304], 00:09:38.776 | 99.00th=[15139], 99.50th=[15533], 99.90th=[16909], 99.95th=[17171], 00:09:38.776 | 99.99th=[17695] 00:09:38.776 bw ( KiB/s): min= 5960, max=24176, per=50.88%, avg=18791.64, stdev=5608.34, samples=11 00:09:38.776 iops : min= 1490, max= 6044, avg=4697.82, stdev=1402.16, samples=11 00:09:38.776 write: IOPS=5364, BW=21.0MiB/s (22.0MB/s)(111MiB/5299msec); 0 zone resets 00:09:38.776 slat (usec): min=14, max=3617, avg=74.47, stdev=184.02 00:09:38.776 clat (usec): min=2865, max=17580, avg=8282.59, stdev=1468.50 00:09:38.776 lat (usec): min=2890, max=17607, avg=8357.06, stdev=1473.25 00:09:38.776 clat percentiles (usec): 00:09:38.776 | 1.00th=[ 3785], 5.00th=[ 5080], 10.00th=[ 6456], 20.00th=[ 7570], 00:09:38.776 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8717], 00:09:38.776 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[ 9896], 00:09:38.776 | 99.00th=[12649], 99.50th=[13435], 99.90th=[14746], 99.95th=[15401], 00:09:38.776 | 99.99th=[15926] 00:09:38.776 bw ( KiB/s): min= 6240, max=23752, per=87.60%, avg=18796.73, stdev=5352.34, samples=11 00:09:38.776 iops : min= 1560, max= 5938, avg=4699.09, stdev=1338.12, samples=11 00:09:38.776 lat (msec) : 2=0.01%, 4=0.67%, 10=80.75%, 20=18.58% 00:09:38.776 cpu : usr=5.33%, sys=21.13%, ctx=4860, majf=0, minf=96 00:09:38.776 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:38.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.776 issued rwts: total=55459,28424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.776 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.776 00:09:38.776 Run status group 0 (all jobs): 00:09:38.776 READ: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=217MiB (227MB), run=6007-6007msec 00:09:38.776 WRITE: bw=21.0MiB/s (22.0MB/s), 21.0MiB/s-21.0MiB/s (22.0MB/s-22.0MB/s), io=111MiB (116MB), run=5299-5299msec 00:09:38.776 00:09:38.776 Disk stats (read/write): 00:09:38.776 nvme0n1: ios=54690/27932, merge=0/0, ticks=498947/218399, in_queue=717346, util=98.73% 00:09:38.776 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:39.343 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67442 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:39.601 19:00:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:39.601 [global] 00:09:39.601 thread=1 00:09:39.601 invalidate=1 00:09:39.601 rw=randrw 00:09:39.601 time_based=1 00:09:39.601 runtime=6 00:09:39.601 ioengine=libaio 00:09:39.601 direct=1 00:09:39.601 bs=4096 00:09:39.601 iodepth=128 00:09:39.601 norandommap=0 00:09:39.601 numjobs=1 00:09:39.601 00:09:39.601 verify_dump=1 00:09:39.601 verify_backlog=512 00:09:39.601 verify_state_save=0 00:09:39.601 do_verify=1 00:09:39.601 verify=crc32c-intel 00:09:39.601 [job0] 00:09:39.601 filename=/dev/nvme0n1 00:09:39.601 Could not set queue depth (nvme0n1) 00:09:39.601 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:39.601 fio-3.35 00:09:39.601 Starting 1 thread 00:09:40.538 19:00:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:40.796 19:00:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:41.054 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:41.312 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:41.571 19:00:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67442 00:09:45.758 00:09:45.758 job0: (groupid=0, jobs=1): err= 0: pid=67463: Mon Jul 15 19:00:12 2024 00:09:45.758 read: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(236MiB/6007msec) 00:09:45.758 slat (usec): min=2, max=8069, avg=50.90, stdev=221.94 00:09:45.758 clat (usec): min=423, max=19200, avg=8786.91, stdev=2355.32 00:09:45.758 lat (usec): min=434, max=19218, avg=8837.81, stdev=2373.79 00:09:45.758 clat percentiles (usec): 00:09:45.758 | 1.00th=[ 3490], 5.00th=[ 4948], 10.00th=[ 5735], 20.00th=[ 6915], 00:09:45.758 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9372], 00:09:45.758 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[11338], 95.00th=[13042], 00:09:45.758 | 99.00th=[15664], 99.50th=[16909], 99.90th=[18482], 99.95th=[18744], 00:09:45.758 | 99.99th=[19006] 00:09:45.758 bw ( KiB/s): min= 4832, max=33272, per=51.35%, avg=20698.91, stdev=7903.60, samples=11 00:09:45.758 iops : min= 1208, max= 8318, avg=5174.73, stdev=1975.90, samples=11 00:09:45.758 write: IOPS=6025, BW=23.5MiB/s (24.7MB/s)(123MiB/5209msec); 0 zone resets 00:09:45.758 slat (usec): min=4, max=6058, avg=59.71, stdev=156.35 00:09:45.758 clat (usec): min=465, max=19184, avg=7321.95, stdev=2204.01 00:09:45.758 lat (usec): min=521, max=19212, avg=7381.66, stdev=2222.08 00:09:45.758 clat percentiles (usec): 00:09:45.758 | 1.00th=[ 3032], 5.00th=[ 3884], 10.00th=[ 4359], 20.00th=[ 5080], 00:09:45.758 | 30.00th=[ 5866], 40.00th=[ 7046], 50.00th=[ 7701], 60.00th=[ 8160], 00:09:45.758 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[10814], 00:09:45.758 | 99.00th=[13042], 99.50th=[14091], 99.90th=[16057], 99.95th=[16712], 00:09:45.758 | 99.99th=[17695] 00:09:45.758 bw ( KiB/s): min= 5208, max=33216, per=86.16%, avg=20768.73, stdev=7839.75, samples=11 00:09:45.758 iops : min= 1302, max= 8304, avg=5192.18, stdev=1959.94, samples=11 00:09:45.758 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:45.758 lat (msec) : 2=0.15%, 4=3.11%, 10=78.96%, 20=17.77% 00:09:45.758 cpu : usr=5.66%, sys=22.89%, ctx=5299, majf=0, minf=114 00:09:45.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:45.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.758 issued rwts: total=60532,31388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.758 00:09:45.758 Run status group 0 (all jobs): 00:09:45.758 READ: bw=39.4MiB/s (41.3MB/s), 39.4MiB/s-39.4MiB/s (41.3MB/s-41.3MB/s), io=236MiB (248MB), run=6007-6007msec 00:09:45.758 WRITE: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=123MiB (129MB), run=5209-5209msec 00:09:45.758 00:09:45.758 Disk stats (read/write): 00:09:45.758 nvme0n1: ios=59965/30593, merge=0/0, ticks=503186/208178, in_queue=711364, util=98.70% 00:09:45.758 19:00:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:46.019 19:00:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:46.019 19:00:13 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:46.019 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:46.019 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.019 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:46.019 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.019 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:46.019 19:00:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.277 19:00:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:46.277 19:00:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:46.277 19:00:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:46.277 19:00:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:46.277 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.278 rmmod nvme_tcp 00:09:46.278 rmmod nvme_fabrics 00:09:46.278 rmmod nvme_keyring 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67244 ']' 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67244 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67244 ']' 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67244 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67244 00:09:46.278 killing process with pid 67244 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67244' 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67244 00:09:46.278 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67244 00:09:46.536 
19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.536 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.536 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.536 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.536 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.536 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.536 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.536 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.796 19:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:46.796 ************************************ 00:09:46.796 END TEST nvmf_target_multipath 00:09:46.796 ************************************ 00:09:46.796 00:09:46.796 real 0m19.527s 00:09:46.796 user 1m14.152s 00:09:46.796 sys 0m9.037s 00:09:46.796 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.796 19:00:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:46.796 19:00:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:46.796 19:00:13 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:46.796 19:00:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:46.796 19:00:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.796 19:00:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:46.796 ************************************ 00:09:46.796 START TEST nvmf_zcopy 00:09:46.796 ************************************ 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:46.796 * Looking for test storage... 
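Before the zcopy run starts, it is worth summarizing what the multipath test above was exercising: while fio ran against /dev/nvme0n1, the test flipped ANA states on the two listeners with nvmf_subsystem_listener_set_ana_state and then polled sysfs until the host saw the new state on each path. Reconstructed from the trace (not the verbatim helper in multipath.sh), check_ana_state is roughly:

    check_ana_state() {                            # e.g. check_ana_state nvme0c0n1 inaccessible
        local path=$1 ana_state=$2 timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            if (( timeout-- == 0 )); then
                return 1                           # give up after roughly 20 seconds
            fi
            sleep 1
        done
    }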
00:09:46.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.796 19:00:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.796 19:00:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:46.796 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:46.796 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.796 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:46.796 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:46.796 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:46.796 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.796 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.796 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.796 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:46.796 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:46.796 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:46.797 Cannot find device "nvmf_tgt_br" 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.797 Cannot find device "nvmf_tgt_br2" 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:46.797 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:46.797 Cannot find device "nvmf_tgt_br" 00:09:47.055 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:47.056 Cannot find device "nvmf_tgt_br2" 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:47.056 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:47.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:47.315 00:09:47.315 --- 10.0.0.2 ping statistics --- 00:09:47.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.315 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:47.315 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:47.315 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:09:47.315 00:09:47.315 --- 10.0.0.3 ping statistics --- 00:09:47.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.315 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:47.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:09:47.315 00:09:47.315 --- 10.0.0.1 ping statistics --- 00:09:47.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.315 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67715 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67715 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67715 ']' 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.315 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.315 [2024-07-15 19:00:14.480945] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:47.315 [2024-07-15 19:00:14.481381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.575 [2024-07-15 19:00:14.621874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.575 [2024-07-15 19:00:14.719695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.575 [2024-07-15 19:00:14.720172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
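The ip/iptables trace above is the nvmf/common.sh veth plus network-namespace topology used by this run. A minimal standalone sketch of the same setup, reconstructed only from the commands visible in the trace (interface, bridge and namespace names, the 10.0.0.x addresses and port 4420 are taken from it; teardown of a previous run and error handling are omitted):

    # Target side lives in its own namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one initiator-facing, two target-facing.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign addresses.
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up, including lo inside the namespace.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the root-namespace ends together so 10.0.0.1 can reach 10.0.0.2/.3.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP (port 4420) in and bridged traffic through, then verify with ping.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1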
00:09:47.575 [2024-07-15 19:00:14.720271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.575 [2024-07-15 19:00:14.720380] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.575 [2024-07-15 19:00:14.720459] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.575 [2024-07-15 19:00:14.720573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.575 [2024-07-15 19:00:14.772149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:47.575 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.575 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:47.575 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.575 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:47.575 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.834 [2024-07-15 19:00:14.887983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.834 [2024-07-15 19:00:14.903954] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:47.834 malloc0 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:47.834 { 00:09:47.834 "params": { 00:09:47.834 "name": "Nvme$subsystem", 00:09:47.834 "trtype": "$TEST_TRANSPORT", 00:09:47.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.834 "adrfam": "ipv4", 00:09:47.834 "trsvcid": "$NVMF_PORT", 00:09:47.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.834 "hdgst": ${hdgst:-false}, 00:09:47.834 "ddgst": ${ddgst:-false} 00:09:47.834 }, 00:09:47.834 "method": "bdev_nvme_attach_controller" 00:09:47.834 } 00:09:47.834 EOF 00:09:47.834 )") 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:47.834 19:00:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:47.834 "params": { 00:09:47.834 "name": "Nvme1", 00:09:47.834 "trtype": "tcp", 00:09:47.834 "traddr": "10.0.0.2", 00:09:47.834 "adrfam": "ipv4", 00:09:47.834 "trsvcid": "4420", 00:09:47.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.834 "hdgst": false, 00:09:47.834 "ddgst": false 00:09:47.834 }, 00:09:47.834 "method": "bdev_nvme_attach_controller" 00:09:47.834 }' 00:09:47.834 [2024-07-15 19:00:15.000639] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:47.834 [2024-07-15 19:00:15.000755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67740 ] 00:09:48.093 [2024-07-15 19:00:15.143944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.093 [2024-07-15 19:00:15.274238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.093 [2024-07-15 19:00:15.340585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:48.351 Running I/O for 10 seconds... 
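The zcopy.sh steps traced above amount to launching nvmf_tgt inside the namespace and issuing a short sequence of RPCs against it. A sketch of the equivalent calls via scripts/rpc.py (the trace uses the rpc_cmd wrapper, which talks to the same /var/tmp/spdk.sock; paths assume running from an SPDK source tree, and the --zcopy transport flag is the feature this test exercises):

    # Start the target in the namespace, as in the trace (NVMF_TARGET_NS_CMD prefix).
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # (the test's waitforlisten polls /var/tmp/spdk.sock here before issuing RPCs)

    # TCP transport with zero-copy enabled and in-capsule data disabled (-c 0).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem limited to 10 namespaces, backed by a 32 MiB / 4096-byte-block malloc bdev,
    # listening on the namespace-side address.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1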
00:09:58.351 00:09:58.351 Latency(us) 00:09:58.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.351 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:58.351 Verification LBA range: start 0x0 length 0x1000 00:09:58.351 Nvme1n1 : 10.02 6432.93 50.26 0.00 0.00 19836.81 2606.55 27644.28 00:09:58.351 =================================================================================================================== 00:09:58.351 Total : 6432.93 50.26 0.00 0.00 19836.81 2606.55 27644.28 00:09:58.619 19:00:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67857 00:09:58.620 19:00:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:58.620 19:00:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:58.620 19:00:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.620 19:00:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:58.620 19:00:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:58.620 19:00:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:58.620 19:00:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:58.620 19:00:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:58.620 { 00:09:58.620 "params": { 00:09:58.620 "name": "Nvme$subsystem", 00:09:58.620 "trtype": "$TEST_TRANSPORT", 00:09:58.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:58.620 "adrfam": "ipv4", 00:09:58.620 "trsvcid": "$NVMF_PORT", 00:09:58.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:58.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:58.620 "hdgst": ${hdgst:-false}, 00:09:58.620 "ddgst": ${ddgst:-false} 00:09:58.620 }, 00:09:58.620 "method": "bdev_nvme_attach_controller" 00:09:58.620 } 00:09:58.620 EOF 00:09:58.620 )") 00:09:58.620 19:00:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:58.620 [2024-07-15 19:00:25.702723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.702769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 19:00:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
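Two bdevperf runs are driven against that subsystem over the 10.0.0.2 listener: the 10-second verify job whose results are tabulated above, and the 5-second 50/50 random read/write job started next (perfpid=67857). A sketch of the two invocations, assuming the /dev/fd/62 and /dev/fd/63 arguments in the trace come from process substitution of gen_nvmf_target_json (the paired gen_nvmf_target_json trace lines suggest exactly that):

    # 10 s verify workload, queue depth 128, 8 KiB I/O.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

    # 5 s mixed 50/50 random read/write workload, same queue depth and I/O size,
    # run in the background so the test can poke the target while I/O is in flight.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!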
00:09:58.620 19:00:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:58.620 19:00:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:58.620 "params": { 00:09:58.620 "name": "Nvme1", 00:09:58.620 "trtype": "tcp", 00:09:58.620 "traddr": "10.0.0.2", 00:09:58.620 "adrfam": "ipv4", 00:09:58.620 "trsvcid": "4420", 00:09:58.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:58.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:58.620 "hdgst": false, 00:09:58.620 "ddgst": false 00:09:58.620 }, 00:09:58.620 "method": "bdev_nvme_attach_controller" 00:09:58.620 }' 00:09:58.620 [2024-07-15 19:00:25.718683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.718712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.730680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.730706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.737934] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:09:58.620 [2024-07-15 19:00:25.738009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67857 ] 00:09:58.620 [2024-07-15 19:00:25.742684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.742851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.754692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.754720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.766692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.766717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.778722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.778753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.790710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.790736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.802711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.802737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.814712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.814737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.826728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.826754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.838768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
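Only the inner bdev_nvme_attach_controller object printed above is visible in the xtrace; the heredoc that gen_nvmf_target_json wraps around it is not traced. The complete file handed to bdevperf presumably follows the usual SPDK JSON-config layout, roughly as below (a hedged reconstruction, not shown verbatim in this log):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }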
Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.838801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.850732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.850758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.862732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.862757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.871071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.620 [2024-07-15 19:00:25.874737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.874900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.886771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.886970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.620 [2024-07-15 19:00:25.898767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.620 [2024-07-15 19:00:25.899000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:25.910794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:25.911004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:25.922804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:25.923017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:25.934833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:25.935129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:25.946820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:25.947031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:25.958819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:25.959028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:25.970808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:25.970838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:25.982797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:25.982837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:25.989810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.885 [2024-07-15 19:00:25.994792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:25.994833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.006797] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.006839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.018821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.018850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.030837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.030867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.042824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.042855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.052879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:58.885 [2024-07-15 19:00:26.054829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.054856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.066849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.066899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.078909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.078968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.090870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.090909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.102883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.102922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.114900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.114945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.126898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.126938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.138928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.138969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.150959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.151012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.885 [2024-07-15 19:00:26.162945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.885 [2024-07-15 19:00:26.162983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.144 [2024-07-15 19:00:26.174958] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.144 [2024-07-15 19:00:26.175000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.144 Running I/O for 5 seconds... 00:09:59.144 [2024-07-15 19:00:26.191701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.144 [2024-07-15 19:00:26.191745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.144 [2024-07-15 19:00:26.208022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.144 [2024-07-15 19:00:26.208095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.144 [2024-07-15 19:00:26.225713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.144 [2024-07-15 19:00:26.225773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.144 [2024-07-15 19:00:26.241283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.144 [2024-07-15 19:00:26.241331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.144 [2024-07-15 19:00:26.259955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.144 [2024-07-15 19:00:26.260030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.144 [2024-07-15 19:00:26.274004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.144 [2024-07-15 19:00:26.274038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.144 [2024-07-15 19:00:26.289710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.144 [2024-07-15 19:00:26.289743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.144 [2024-07-15 19:00:26.306041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.144 [2024-07-15 19:00:26.306074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.144 [2024-07-15 19:00:26.323127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.144 [2024-07-15 19:00:26.323160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.144 [2024-07-15 19:00:26.339326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.144 [2024-07-15 19:00:26.339359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.144 [2024-07-15 19:00:26.355448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.144 [2024-07-15 19:00:26.355480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.145 [2024-07-15 19:00:26.373617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.145 [2024-07-15 19:00:26.373650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.145 [2024-07-15 19:00:26.388374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.145 [2024-07-15 19:00:26.388438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.145 [2024-07-15 19:00:26.402981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.145 
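The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs surrounding the 5-second job is expected: while bdevperf is running, the test keeps re-issuing nvmf_subsystem_add_ns for a namespace ID that already exists, which drives the subsystem through its pause/resume path (nvmf_rpc_ns_paused) under active zero-copy I/O. A hedged sketch of what that driver loop amounts to; the exact loop body in zcopy.sh is not visible in this trace, only perfpid and the RPC arguments are taken from it:

    # Keep poking the subsystem while the background bdevperf (perfpid) is still alive.
    # Each attempt pauses the subsystem, fails with "NSID 1 already in use", and resumes it,
    # exercising live reconfiguration while zero-copy traffic is flowing.
    while kill -0 "$perfpid" 2> /dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
    wait "$perfpid"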
[2024-07-15 19:00:26.403013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.145 [2024-07-15 19:00:26.419311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.145 [2024-07-15 19:00:26.419344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.402 [2024-07-15 19:00:26.436989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.402 [2024-07-15 19:00:26.437023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.402 [2024-07-15 19:00:26.451255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.402 [2024-07-15 19:00:26.451288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.402 [2024-07-15 19:00:26.466708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.402 [2024-07-15 19:00:26.466743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.402 [2024-07-15 19:00:26.484210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.402 [2024-07-15 19:00:26.484247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.402 [2024-07-15 19:00:26.500648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.402 [2024-07-15 19:00:26.500681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.403 [2024-07-15 19:00:26.518938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.403 [2024-07-15 19:00:26.518970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.403 [2024-07-15 19:00:26.534423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.403 [2024-07-15 19:00:26.534456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.403 [2024-07-15 19:00:26.546167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.403 [2024-07-15 19:00:26.546201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.403 [2024-07-15 19:00:26.562634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.403 [2024-07-15 19:00:26.562667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.403 [2024-07-15 19:00:26.579452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.403 [2024-07-15 19:00:26.579485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.403 [2024-07-15 19:00:26.597442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.403 [2024-07-15 19:00:26.597475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.403 [2024-07-15 19:00:26.612278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.403 [2024-07-15 19:00:26.612316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.403 [2024-07-15 19:00:26.627635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.403 [2024-07-15 19:00:26.627670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.403 [2024-07-15 19:00:26.636984] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.403 [2024-07-15 19:00:26.637018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.403 [2024-07-15 19:00:26.653228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.403 [2024-07-15 19:00:26.653262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.403 [2024-07-15 19:00:26.663422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.403 [2024-07-15 19:00:26.663460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.403 [2024-07-15 19:00:26.678307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.403 [2024-07-15 19:00:26.678341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.695112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.695145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.713203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.713236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.727650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.727683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.744236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.744272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.761197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.761231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.778651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.778686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.792922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.792955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.808477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.808539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.825478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.825554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.841776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.841810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.858694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.858726] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.873741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.873774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.888861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.888910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.898060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.898108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.914343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.914376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.929710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.929743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.660 [2024-07-15 19:00:26.946190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.660 [2024-07-15 19:00:26.946236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:26.963099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:26.963140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:26.979690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:26.979725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:26.994906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:26.994945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.011039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.011073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.026902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.026936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.043424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.043465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.060097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.060177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.077156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.077203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.093390] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.093434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.110628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.110683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.127030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.127081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.145494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.145535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.159552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.159612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.175102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.175135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.184694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.184741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.918 [2024-07-15 19:00:27.199869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.918 [2024-07-15 19:00:27.199917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.176 [2024-07-15 19:00:27.215831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.176 [2024-07-15 19:00:27.215872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.176 [2024-07-15 19:00:27.234037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.176 [2024-07-15 19:00:27.234104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.176 [2024-07-15 19:00:27.247695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.176 [2024-07-15 19:00:27.247751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.176 [2024-07-15 19:00:27.264386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.264481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.177 [2024-07-15 19:00:27.279719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.279767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.177 [2024-07-15 19:00:27.296833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.296883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.177 [2024-07-15 19:00:27.311731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.311778] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.177 [2024-07-15 19:00:27.327651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.327694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.177 [2024-07-15 19:00:27.345927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.345983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.177 [2024-07-15 19:00:27.359422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.359466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.177 [2024-07-15 19:00:27.375390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.375430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.177 [2024-07-15 19:00:27.392017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.392067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.177 [2024-07-15 19:00:27.409818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.409851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.177 [2024-07-15 19:00:27.424493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.424544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.177 [2024-07-15 19:00:27.439706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.439740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.177 [2024-07-15 19:00:27.455390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.177 [2024-07-15 19:00:27.455425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.472309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.472347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.489686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.489720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.507830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.507864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.522268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.522298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.538339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.538371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.555309] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.555340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.572733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.572765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.588340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.588403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.607335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.607369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.621121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.621179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.636651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.636685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.646470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.646531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.434 [2024-07-15 19:00:27.662118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.434 [2024-07-15 19:00:27.662161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.435 [2024-07-15 19:00:27.673500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.435 [2024-07-15 19:00:27.673556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.435 [2024-07-15 19:00:27.688553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.435 [2024-07-15 19:00:27.688590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.435 [2024-07-15 19:00:27.704386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.435 [2024-07-15 19:00:27.704437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.435 [2024-07-15 19:00:27.722247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.435 [2024-07-15 19:00:27.722288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.737129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.737172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.753959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.754007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.769966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.770007] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.786705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.786750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.802247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.802284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.811773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.811806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.826997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.827031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.843013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.843046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.859684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.859717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.876226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.876262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.892422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.892475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.910148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.910198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.926243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.926292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.943903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.943953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.957897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.957944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.692 [2024-07-15 19:00:27.973703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.692 [2024-07-15 19:00:27.973749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:27.990018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:27.990066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.000007] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.000050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.016715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.016780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.032284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.032342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.047978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.048021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.066887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.066946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.081162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.081205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.097391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.097442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.113472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.113536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.129826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.129876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.139218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.139250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.155851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.155899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.174543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.174587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.189348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.189382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.204618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.204651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.215472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.215532] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.950 [2024-07-15 19:00:28.232004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.950 [2024-07-15 19:00:28.232037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.248315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.248351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.264738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.264771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.281748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.281788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.298137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.298181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.315911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.315948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.331331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.331365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.342714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.342754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.359029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.359064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.375516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.375574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.393725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.393771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.409057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.409101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.425000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.425080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.442296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.442341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.458121] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.458165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.476629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.476669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.208 [2024-07-15 19:00:28.492290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.208 [2024-07-15 19:00:28.492327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.502236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.502271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.517809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.517859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.535646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.535678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.551981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.552016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.568812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.568864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.585148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.585198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.594508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.594561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.609652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.609702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.625608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.625660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.642984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.643035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.659230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.659282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.675689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.675728] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.693337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.693390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.709172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.709222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.719434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.719473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.735495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.735595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.466 [2024-07-15 19:00:28.751198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.466 [2024-07-15 19:00:28.751257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.723 [2024-07-15 19:00:28.767298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.723 [2024-07-15 19:00:28.767332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.723 [2024-07-15 19:00:28.777325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.723 [2024-07-15 19:00:28.777359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.723 [2024-07-15 19:00:28.792044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.723 [2024-07-15 19:00:28.792129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.723 [2024-07-15 19:00:28.801975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.723 [2024-07-15 19:00:28.802009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.723 [2024-07-15 19:00:28.818124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.723 [2024-07-15 19:00:28.818156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.724 [2024-07-15 19:00:28.833931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.724 [2024-07-15 19:00:28.833963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.724 [2024-07-15 19:00:28.851629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.724 [2024-07-15 19:00:28.851689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.724 [2024-07-15 19:00:28.867556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.724 [2024-07-15 19:00:28.867600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.724 [2024-07-15 19:00:28.886358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.724 [2024-07-15 19:00:28.886417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.724 [2024-07-15 19:00:28.900692] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.724 [2024-07-15 19:00:28.900722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.724 [2024-07-15 19:00:28.916105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.724 [2024-07-15 19:00:28.916142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.724 [2024-07-15 19:00:28.933552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.724 [2024-07-15 19:00:28.933638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.724 [2024-07-15 19:00:28.948465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.724 [2024-07-15 19:00:28.948528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.724 [2024-07-15 19:00:28.964615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.724 [2024-07-15 19:00:28.964643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.724 [2024-07-15 19:00:28.980896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.724 [2024-07-15 19:00:28.980928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.724 [2024-07-15 19:00:28.999896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.724 [2024-07-15 19:00:28.999928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.014491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.014554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.032645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.032675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.047747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.047779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.058871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.058929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.074322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.074356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.092128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.092175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.107529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.107575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.122949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.122984] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.132596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.132644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.147290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.147326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.162667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.162718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.172513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.172564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.187483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.187543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.205553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.205597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.221751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.221795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.238230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.238274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.255103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.255157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.982 [2024-07-15 19:00:29.270492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.982 [2024-07-15 19:00:29.270576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.248 [2024-07-15 19:00:29.279964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.248 [2024-07-15 19:00:29.280002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.248 [2024-07-15 19:00:29.295639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.248 [2024-07-15 19:00:29.295673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.248 [2024-07-15 19:00:29.310608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.248 [2024-07-15 19:00:29.310643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.319779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.319811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.335091] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.335122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.351263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.351297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.368315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.368351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.385179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.385214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.401576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.401629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.418688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.418746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.434752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.434818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.453761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.453822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.468286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.468342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.477448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.477490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.492897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.492943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.509369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.509408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.249 [2024-07-15 19:00:29.525652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.249 [2024-07-15 19:00:29.525698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.542612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.542669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.558804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.558852] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.575769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.575818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.592680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.592715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.608821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.608870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.627024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.627056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.643819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.643867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.659152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.659184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.678041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.678074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.691706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.691738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.706199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.706230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.721214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.721247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.732896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.732924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.748644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.748679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.767136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.767169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.508 [2024-07-15 19:00:29.782474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.508 [2024-07-15 19:00:29.782543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.798021] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.798092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.815719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.815754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.831950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.831990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.848793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.848840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.865744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.865778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.880970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.881009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.897811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.897860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.914554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.914600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.931738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.931796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.945161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.945205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.962033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.962069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.977034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.977089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:29.992663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:29.992693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:30.010126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:30.010159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:30.025345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:30.025378] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:30.037038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:30.037070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.769 [2024-07-15 19:00:30.054156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.769 [2024-07-15 19:00:30.054206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.068964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.069028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.083826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.083875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.101161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.101210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.115663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.115710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.132189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.132254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.148348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.148398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.165271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.165319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.182556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.182611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.198810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.198854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.215900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.215940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.231074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.231112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.247444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.247485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.264185] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.264216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.280973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.281013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.296840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.296874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.031 [2024-07-15 19:00:30.306331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.031 [2024-07-15 19:00:30.306363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.290 [2024-07-15 19:00:30.321369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.290 [2024-07-15 19:00:30.321434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.290 [2024-07-15 19:00:30.336774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.290 [2024-07-15 19:00:30.336806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.290 [2024-07-15 19:00:30.345609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.290 [2024-07-15 19:00:30.345642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.290 [2024-07-15 19:00:30.360979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.290 [2024-07-15 19:00:30.361011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.290 [2024-07-15 19:00:30.377147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.290 [2024-07-15 19:00:30.377180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.290 [2024-07-15 19:00:30.392483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.290 [2024-07-15 19:00:30.392536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.290 [2024-07-15 19:00:30.407902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.290 [2024-07-15 19:00:30.407947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.290 [2024-07-15 19:00:30.425929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.290 [2024-07-15 19:00:30.425961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.290 [2024-07-15 19:00:30.441558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.290 [2024-07-15 19:00:30.441589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.290 [2024-07-15 19:00:30.459413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.290 [2024-07-15 19:00:30.459446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.290 [2024-07-15 19:00:30.473868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.290 [2024-07-15 19:00:30.473916] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.290 [2024-07-15 19:00:30.489715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.291 [2024-07-15 19:00:30.489747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.291 [2024-07-15 19:00:30.506060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.291 [2024-07-15 19:00:30.506099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.291 [2024-07-15 19:00:30.523224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.291 [2024-07-15 19:00:30.523258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.291 [2024-07-15 19:00:30.539237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.291 [2024-07-15 19:00:30.539275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.291 [2024-07-15 19:00:30.557021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.291 [2024-07-15 19:00:30.557056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.291 [2024-07-15 19:00:30.571695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.291 [2024-07-15 19:00:30.571728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.587570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.587628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.606163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.606218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.620750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.620792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.630238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.630279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.645387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.645427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.661911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.661956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.677979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.678014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.695234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.695265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.712012] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.712087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.727320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.727354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.741676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.741708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.758189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.758224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.774034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.774078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.783579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.783654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.798617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.798671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.814491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.814567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.550 [2024-07-15 19:00:30.832887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.550 [2024-07-15 19:00:30.832958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:30.847321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:30.847373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:30.857688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:30.857750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:30.873029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:30.873077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:30.890396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:30.890442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:30.905851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:30.905927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:30.922091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:30.922150] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:30.940541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:30.940592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:30.954604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:30.954640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:30.970521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:30.970568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:30.987423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:30.987483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:31.004009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:31.004059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:31.021415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:31.021480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:31.035896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:31.035945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:31.052320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:31.052373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.808 [2024-07-15 19:00:31.068637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.808 [2024-07-15 19:00:31.068686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.809 [2024-07-15 19:00:31.085158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.809 [2024-07-15 19:00:31.085206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.101819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.101867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.119166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.119214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.135788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.135843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.151741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.151795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.170168] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.170217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.183392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.183440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 00:10:04.067 Latency(us) 00:10:04.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.067 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:04.067 Nvme1n1 : 5.01 12285.58 95.98 0.00 0.00 10405.13 4289.63 20137.43 00:10:04.067 =================================================================================================================== 00:10:04.067 Total : 12285.58 95.98 0.00 0.00 10405.13 4289.63 20137.43 00:10:04.067 [2024-07-15 19:00:31.193294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.193343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.205273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.205319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.217283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.217329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.229291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.229341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.241309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.241358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.253317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.253371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.265318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.265368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.277318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.277368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.289333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.067 [2024-07-15 19:00:31.289383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.067 [2024-07-15 19:00:31.301338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.068 [2024-07-15 19:00:31.301387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.068 [2024-07-15 19:00:31.313341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.068 [2024-07-15 19:00:31.313389] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.068 [2024-07-15 19:00:31.325335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.068 [2024-07-15 19:00:31.325382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.068 [2024-07-15 19:00:31.337326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.068 [2024-07-15 19:00:31.337366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.068 [2024-07-15 19:00:31.349371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.068 [2024-07-15 19:00:31.349418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.326 [2024-07-15 19:00:31.361353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.326 [2024-07-15 19:00:31.361404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.326 [2024-07-15 19:00:31.373389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.326 [2024-07-15 19:00:31.373420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.326 [2024-07-15 19:00:31.385346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.326 [2024-07-15 19:00:31.385389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.326 [2024-07-15 19:00:31.397364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.326 [2024-07-15 19:00:31.397415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.326 [2024-07-15 19:00:31.409369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.326 [2024-07-15 19:00:31.409424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.326 [2024-07-15 19:00:31.421370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.326 [2024-07-15 19:00:31.421417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.326 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67857) - No such process 00:10:04.326 19:00:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67857 00:10:04.326 19:00:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.326 19:00:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.326 19:00:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.326 19:00:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.326 19:00:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:04.327 19:00:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.327 19:00:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.327 delay0 00:10:04.327 19:00:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.327 19:00:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:04.327 19:00:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.327 
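The long run of "Requested NSID 1 already in use" / "Unable to add namespace" messages above appears to be expected negative-path output: the test keeps asking the target to re-add NSID 1 while that namespace is still attached, and the RPC correctly rejects each attempt. The tail of the test, visible in the surrounding trace, then detaches the namespace, wraps malloc0 in a delay bdev, re-exposes it as NSID 1, and aborts slow I/O against it. A condensed sketch of those steps, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py (paths and flags are copied from the trace; only that wrapper substitution is an assumption):

    # detach the current namespace 1 from the subsystem
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # wrap malloc0 in a delay bdev that injects large read/write latencies
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the delay bdev as namespace 1 again
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive random I/O at the slow namespace and abort it in flight (run shown just below)
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'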
19:00:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.327 19:00:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.327 19:00:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:04.327 [2024-07-15 19:00:31.607622] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:12.450 Initializing NVMe Controllers 00:10:12.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:12.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:12.450 Initialization complete. Launching workers. 00:10:12.450 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 260, failed: 20074 00:10:12.450 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20236, failed to submit 98 00:10:12.450 success 20131, unsuccess 105, failed 0 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:12.450 rmmod nvme_tcp 00:10:12.450 rmmod nvme_fabrics 00:10:12.450 rmmod nvme_keyring 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67715 ']' 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67715 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67715 ']' 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67715 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67715 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:12.450 killing process with pid 67715 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67715' 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67715 00:10:12.450 19:00:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67715 00:10:12.450 19:00:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.450 19:00:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:10:12.450 19:00:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:12.450 19:00:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.450 19:00:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.450 19:00:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.450 19:00:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.450 19:00:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.450 19:00:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:12.450 00:10:12.450 real 0m25.158s 00:10:12.450 user 0m40.889s 00:10:12.450 sys 0m7.490s 00:10:12.450 19:00:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.450 19:00:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.450 ************************************ 00:10:12.450 END TEST nvmf_zcopy 00:10:12.450 ************************************ 00:10:12.450 19:00:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:12.450 19:00:39 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:12.450 19:00:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:12.450 19:00:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.450 19:00:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.450 ************************************ 00:10:12.450 START TEST nvmf_nmic 00:10:12.450 ************************************ 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:12.450 * Looking for test storage... 
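nvmf_zcopy finished cleanly (return 0, about 25 s of wall time per the timing summary above), and autotest moves on to the next target test, nvmf_nmic, by invoking nmic.sh through run_test. To reproduce just this test outside the CI wrapper, the same script can be run directly from an already-built SPDK tree at the path shown in the trace; root is needed because it loads nvme kernel modules and creates network namespaces (a minimal sketch, not part of the original log):

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvmf/target/nmic.sh --transport=tcp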
00:10:12.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.450 19:00:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
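nvmf_veth_init, invoked just above, builds the test network the rest of this run depends on: an nvmf_tgt_ns_spdk namespace holding the target-side interfaces, an nvmf_br bridge on the host side, and the 10.0.0.1/10.0.0.2/10.0.0.3 addresses. The ip/iptables commands logged next do exactly this; condensed into a hand-runnable sketch (iproute2 and iptables assumed, error handling and the cleanup path in test/nvmf/common.sh omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # host -> target-namespace reachability check, as in the log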
00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:12.451 Cannot find device "nvmf_tgt_br" 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.451 Cannot find device "nvmf_tgt_br2" 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:12.451 Cannot find device "nvmf_tgt_br" 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:12.451 Cannot find device "nvmf_tgt_br2" 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:12.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:10:12.451 00:10:12.451 --- 10.0.0.2 ping statistics --- 00:10:12.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.451 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:12.451 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.451 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:10:12.451 00:10:12.451 --- 10.0.0.3 ping statistics --- 00:10:12.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.451 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:12.451 00:10:12.451 --- 10.0.0.1 ping statistics --- 00:10:12.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.451 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.451 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68187 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68187 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68187 ']' 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.452 19:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.452 [2024-07-15 19:00:39.668855] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:10:12.452 [2024-07-15 19:00:39.668967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.710 [2024-07-15 19:00:39.810462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.710 [2024-07-15 19:00:39.942163] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.710 [2024-07-15 19:00:39.942223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:12.710 [2024-07-15 19:00:39.942248] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.710 [2024-07-15 19:00:39.942258] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.710 [2024-07-15 19:00:39.942268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.710 [2024-07-15 19:00:39.942431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.710 [2024-07-15 19:00:39.942565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.710 [2024-07-15 19:00:39.942843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.710 [2024-07-15 19:00:39.942862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.968 [2024-07-15 19:00:40.004399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.536 [2024-07-15 19:00:40.763276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.536 Malloc0 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.536 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.795 19:00:40 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.795 [2024-07-15 19:00:40.848996] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.795 test case1: single bdev can't be used in multiple subsystems 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.795 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.795 [2024-07-15 19:00:40.872859] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:13.795 [2024-07-15 19:00:40.872932] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:13.795 [2024-07-15 19:00:40.872945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.795 request: 00:10:13.795 { 00:10:13.795 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:13.795 "namespace": { 00:10:13.795 "bdev_name": "Malloc0", 00:10:13.795 "no_auto_visible": false 00:10:13.795 }, 00:10:13.795 "method": "nvmf_subsystem_add_ns", 00:10:13.795 "req_id": 1 00:10:13.795 } 00:10:13.795 Got JSON-RPC error response 00:10:13.795 response: 00:10:13.795 { 00:10:13.795 "code": -32602, 00:10:13.795 "message": "Invalid parameters" 00:10:13.795 } 00:10:13.796 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:13.796 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:13.796 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:13.796 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:13.796 Adding namespace failed - expected result. 
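The -32602 error above is the expected outcome of test case1: a bdev that already backs a namespace in one subsystem cannot be claimed by a second one. The rpc_cmd calls in the log are thin wrappers over scripts/rpc.py, so the same check can be reproduced directly against a running nvmf_tgt (a sketch using the rpc.py path from this log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: Malloc0 is already claimed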
00:10:13.796 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:13.796 test case2: host connect to nvmf target in multiple paths 00:10:13.796 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:13.796 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.796 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.796 [2024-07-15 19:00:40.889096] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:13.796 19:00:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.796 19:00:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid=1bdc3113-659b-4df6-a9cf-a9738596adff -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.796 19:00:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid=1bdc3113-659b-4df6-a9cf-a9738596adff -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:14.054 19:00:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.055 19:00:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:14.055 19:00:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.055 19:00:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:14.055 19:00:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:15.970 19:00:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:15.970 19:00:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:15.970 19:00:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.970 19:00:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:15.970 19:00:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.970 19:00:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:15.970 19:00:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:15.970 [global] 00:10:15.970 thread=1 00:10:15.970 invalidate=1 00:10:15.970 rw=write 00:10:15.970 time_based=1 00:10:15.970 runtime=1 00:10:15.970 ioengine=libaio 00:10:15.970 direct=1 00:10:15.970 bs=4096 00:10:15.970 iodepth=1 00:10:15.970 norandommap=0 00:10:15.970 numjobs=1 00:10:15.970 00:10:15.970 verify_dump=1 00:10:15.970 verify_backlog=512 00:10:15.970 verify_state_save=0 00:10:15.970 do_verify=1 00:10:15.970 verify=crc32c-intel 00:10:15.970 [job0] 00:10:15.970 filename=/dev/nvme0n1 00:10:15.970 Could not set queue depth (nvme0n1) 00:10:16.229 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.229 fio-3.35 00:10:16.229 Starting 1 thread 00:10:17.604 00:10:17.604 job0: (groupid=0, jobs=1): err= 0: pid=68279: Mon Jul 15 19:00:44 2024 00:10:17.604 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(9.98MiB/1001msec) 00:10:17.604 slat (nsec): min=11666, max=94217, avg=17033.04, stdev=5939.91 00:10:17.604 clat (usec): 
min=138, max=391, avg=216.66, stdev=30.41 00:10:17.604 lat (usec): min=151, max=421, avg=233.69, stdev=31.49 00:10:17.604 clat percentiles (usec): 00:10:17.604 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 192], 00:10:17.604 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:10:17.604 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 269], 00:10:17.604 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 343], 99.95th=[ 351], 00:10:17.604 | 99.99th=[ 392] 00:10:17.604 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:17.604 slat (usec): min=16, max=100, avg=23.91, stdev= 7.61 00:10:17.604 clat (usec): min=86, max=430, avg=129.34, stdev=25.10 00:10:17.604 lat (usec): min=107, max=470, avg=153.25, stdev=26.91 00:10:17.604 clat percentiles (usec): 00:10:17.604 | 1.00th=[ 92], 5.00th=[ 96], 10.00th=[ 101], 20.00th=[ 109], 00:10:17.604 | 30.00th=[ 115], 40.00th=[ 121], 50.00th=[ 127], 60.00th=[ 133], 00:10:17.604 | 70.00th=[ 141], 80.00th=[ 149], 90.00th=[ 161], 95.00th=[ 174], 00:10:17.604 | 99.00th=[ 196], 99.50th=[ 208], 99.90th=[ 330], 99.95th=[ 330], 00:10:17.604 | 99.99th=[ 433] 00:10:17.604 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:17.604 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:17.604 lat (usec) : 100=4.77%, 250=88.58%, 500=6.65% 00:10:17.604 cpu : usr=1.20%, sys=9.10%, ctx=5124, majf=0, minf=2 00:10:17.604 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.604 issued rwts: total=2555,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.604 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.604 00:10:17.604 Run status group 0 (all jobs): 00:10:17.604 READ: bw=9.97MiB/s (10.5MB/s), 9.97MiB/s-9.97MiB/s (10.5MB/s-10.5MB/s), io=9.98MiB (10.5MB), run=1001-1001msec 00:10:17.604 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:17.604 00:10:17.604 Disk stats (read/write): 00:10:17.604 nvme0n1: ios=2178/2560, merge=0/0, ticks=522/371, in_queue=893, util=91.68% 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:17.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 
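The fio-wrapper call above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) expands to exactly the job parameters printed before the run: a one-second, single-job, 4 KiB, queue-depth-1 libaio write workload with crc32c-intel verification. Saved as a standalone job file (hypothetical name) it can be replayed with plain fio against the connected namespace:

  # nvmf-write-verify.fio -- reconstruction of the parameters printed in the log
  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel
  [job0]
  filename=/dev/nvme0n1

  # run it: fio nvmf-write-verify.fio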
00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:17.604 rmmod nvme_tcp 00:10:17.604 rmmod nvme_fabrics 00:10:17.604 rmmod nvme_keyring 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68187 ']' 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68187 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68187 ']' 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68187 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68187 00:10:17.604 killing process with pid 68187 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68187' 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68187 00:10:17.604 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68187 00:10:17.863 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:17.863 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:17.863 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:17.863 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.863 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:17.863 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.863 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.863 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.863 19:00:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:17.863 00:10:17.863 real 0m5.866s 00:10:17.863 user 0m18.923s 00:10:17.863 sys 0m2.009s 00:10:17.863 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.863 19:00:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.863 ************************************ 00:10:17.863 END TEST nvmf_nmic 00:10:17.863 ************************************ 00:10:17.863 19:00:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:17.863 19:00:45 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:17.863 19:00:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:17.863 19:00:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:10:17.863 19:00:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:17.863 ************************************ 00:10:17.863 START TEST nvmf_fio_target 00:10:17.863 ************************************ 00:10:17.863 19:00:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:17.863 * Looking for test storage... 00:10:17.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:17.863 19:00:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:17.863 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:17.863 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.863 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:17.864 
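Both suites derive the initiator identity the same way: common.sh asks nvme-cli for a fresh host NQN and reuses its UUID part as the host ID, which is why the nvme connect commands in this log carry matching --hostnqn/--hostid values. A sketch of that derivation (the exact parameter expansion inside common.sh may differ; this only reproduces the values seen above):

  NVME_HOSTNQN=$(nvme gen-hostnqn)                               # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN#nqn.2014-08.org.nvmexpress:uuid:}   # the bare <uuid>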
19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:17.864 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:18.122 Cannot find device "nvmf_tgt_br" 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.122 Cannot find device "nvmf_tgt_br2" 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:18.122 Cannot find device "nvmf_tgt_br" 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:18.122 Cannot find device "nvmf_tgt_br2" 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:18.122 19:00:45 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.122 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:18.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:18.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:10:18.381 00:10:18.381 --- 10.0.0.2 ping statistics --- 00:10:18.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.381 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:18.381 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:18.381 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:10:18.381 00:10:18.381 --- 10.0.0.3 ping statistics --- 00:10:18.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.381 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:18.381 00:10:18.381 --- 10.0.0.1 ping statistics --- 00:10:18.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.381 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68457 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68457 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68457 ']' 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.381 19:00:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.382 19:00:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
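nvmfappstart, shown above, reduces to launching the target binary inside the test namespace and waiting on its RPC socket before any configuration is issued. A rough equivalent of the launch plus the waitforlisten step (rpc_get_methods is used here only as a cheap readiness probe; the real helper retries with a bounded count):

  spdk=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # poll until the target is listening on /var/tmp/spdk.sock
  done
  echo "nvmf_tgt is up as pid $nvmfpid"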
00:10:18.382 19:00:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.382 19:00:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.382 [2024-07-15 19:00:45.519769] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:10:18.382 [2024-07-15 19:00:45.519868] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.382 [2024-07-15 19:00:45.655679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.641 [2024-07-15 19:00:45.776949] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.641 [2024-07-15 19:00:45.777010] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.641 [2024-07-15 19:00:45.777036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.641 [2024-07-15 19:00:45.777043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.641 [2024-07-15 19:00:45.777050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.641 [2024-07-15 19:00:45.777205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.641 [2024-07-15 19:00:45.777402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.641 [2024-07-15 19:00:45.778144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.641 [2024-07-15 19:00:45.778147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.641 [2024-07-15 19:00:45.831930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:19.208 19:00:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:19.208 19:00:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:19.208 19:00:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:19.208 19:00:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:19.208 19:00:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.208 19:00:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.208 19:00:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:19.468 [2024-07-15 19:00:46.665430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.468 19:00:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.728 19:00:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:19.728 19:00:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.292 19:00:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:20.292 19:00:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.292 19:00:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
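The bdev_malloc_create calls here, together with the bdev_raid_create and nvmf_subsystem_add_ns calls that follow, build the four namespaces fio later sees as nvme0n1..nvme0n4: two plain malloc bdevs, one raid0 over two more, and one concat over three more, all exported by cnode1 on 10.0.0.2:4420. Gathered into one rpc.py sequence (a sketch of the same configuration; bdev names assume the default MallocN numbering shown in the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                    # Malloc0
  $rpc bdev_malloc_create 64 512                                    # Malloc1
  $rpc bdev_malloc_create 64 512                                    # Malloc2
  $rpc bdev_malloc_create 64 512                                    # Malloc3
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'    # striped namespace
  $rpc bdev_malloc_create 64 512                                    # Malloc4
  $rpc bdev_malloc_create 64 512                                    # Malloc5
  $rpc bdev_malloc_create 64 512                                    # Malloc6
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420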
00:10:20.292 19:00:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.859 19:00:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:20.859 19:00:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:20.859 19:00:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.118 19:00:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:21.118 19:00:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.376 19:00:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:21.376 19:00:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.633 19:00:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:21.633 19:00:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:21.891 19:00:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:22.149 19:00:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:22.149 19:00:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.406 19:00:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:22.406 19:00:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:22.664 19:00:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.923 [2024-07-15 19:00:50.087537] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.923 19:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:23.181 19:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:23.438 19:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid=1bdc3113-659b-4df6-a9cf-a9738596adff -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.438 19:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:23.438 19:00:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:23.438 19:00:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.438 19:00:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:23.438 19:00:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # 
nvme_device_counter=4 00:10:23.438 19:00:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.010 19:00:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.010 19:00:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.010 19:00:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.010 19:00:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:26.010 19:00:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.010 19:00:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:26.010 19:00:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:26.010 [global] 00:10:26.010 thread=1 00:10:26.010 invalidate=1 00:10:26.010 rw=write 00:10:26.010 time_based=1 00:10:26.010 runtime=1 00:10:26.010 ioengine=libaio 00:10:26.010 direct=1 00:10:26.010 bs=4096 00:10:26.010 iodepth=1 00:10:26.010 norandommap=0 00:10:26.010 numjobs=1 00:10:26.010 00:10:26.010 verify_dump=1 00:10:26.010 verify_backlog=512 00:10:26.010 verify_state_save=0 00:10:26.010 do_verify=1 00:10:26.010 verify=crc32c-intel 00:10:26.010 [job0] 00:10:26.010 filename=/dev/nvme0n1 00:10:26.010 [job1] 00:10:26.010 filename=/dev/nvme0n2 00:10:26.010 [job2] 00:10:26.010 filename=/dev/nvme0n3 00:10:26.010 [job3] 00:10:26.010 filename=/dev/nvme0n4 00:10:26.010 Could not set queue depth (nvme0n1) 00:10:26.010 Could not set queue depth (nvme0n2) 00:10:26.010 Could not set queue depth (nvme0n3) 00:10:26.010 Could not set queue depth (nvme0n4) 00:10:26.010 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.010 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.010 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.010 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.010 fio-3.35 00:10:26.010 Starting 4 threads 00:10:26.945 00:10:26.945 job0: (groupid=0, jobs=1): err= 0: pid=68641: Mon Jul 15 19:00:54 2024 00:10:26.945 read: IOPS=2368, BW=9475KiB/s (9702kB/s)(9484KiB/1001msec) 00:10:26.945 slat (nsec): min=11409, max=49522, avg=14561.46, stdev=3593.77 00:10:26.945 clat (usec): min=140, max=2780, avg=208.71, stdev=62.25 00:10:26.945 lat (usec): min=153, max=2799, avg=223.27, stdev=62.63 00:10:26.945 clat percentiles (usec): 00:10:26.945 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 184], 00:10:26.945 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 212], 00:10:26.945 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 258], 00:10:26.945 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 490], 99.95th=[ 873], 00:10:26.945 | 99.99th=[ 2769] 00:10:26.945 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:26.945 slat (usec): min=14, max=132, avg=22.13, stdev= 5.84 00:10:26.945 clat (usec): min=93, max=307, avg=158.01, stdev=30.01 00:10:26.945 lat (usec): min=112, max=398, avg=180.14, stdev=31.73 00:10:26.945 clat percentiles (usec): 00:10:26.945 | 1.00th=[ 105], 5.00th=[ 117], 10.00th=[ 123], 20.00th=[ 133], 00:10:26.945 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 155], 60.00th=[ 
163], 00:10:26.945 | 70.00th=[ 172], 80.00th=[ 182], 90.00th=[ 198], 95.00th=[ 215], 00:10:26.945 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 281], 99.95th=[ 285], 00:10:26.945 | 99.99th=[ 310] 00:10:26.945 bw ( KiB/s): min=11064, max=11064, per=31.19%, avg=11064.00, stdev= 0.00, samples=1 00:10:26.945 iops : min= 2766, max= 2766, avg=2766.00, stdev= 0.00, samples=1 00:10:26.945 lat (usec) : 100=0.10%, 250=95.74%, 500=4.12%, 1000=0.02% 00:10:26.945 lat (msec) : 4=0.02% 00:10:26.945 cpu : usr=2.00%, sys=7.30%, ctx=4931, majf=0, minf=5 00:10:26.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.945 issued rwts: total=2371,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.945 job1: (groupid=0, jobs=1): err= 0: pid=68642: Mon Jul 15 19:00:54 2024 00:10:26.945 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:26.945 slat (nsec): min=12607, max=57592, avg=17152.87, stdev=4591.00 00:10:26.945 clat (usec): min=184, max=704, avg=248.44, stdev=38.24 00:10:26.945 lat (usec): min=198, max=731, avg=265.59, stdev=39.52 00:10:26.945 clat percentiles (usec): 00:10:26.945 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 219], 00:10:26.945 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 251], 00:10:26.945 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 314], 00:10:26.945 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 529], 99.95th=[ 570], 00:10:26.945 | 99.99th=[ 701] 00:10:26.945 write: IOPS=2082, BW=8332KiB/s (8532kB/s)(8340KiB/1001msec); 0 zone resets 00:10:26.945 slat (usec): min=16, max=149, avg=26.54, stdev= 7.39 00:10:26.945 clat (usec): min=123, max=1917, avg=188.28, stdev=49.52 00:10:26.945 lat (usec): min=143, max=1945, avg=214.82, stdev=51.16 00:10:26.945 clat percentiles (usec): 00:10:26.945 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 163], 00:10:26.945 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 192], 00:10:26.945 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 225], 95.00th=[ 245], 00:10:26.945 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 408], 99.95th=[ 709], 00:10:26.945 | 99.99th=[ 1926] 00:10:26.945 bw ( KiB/s): min= 8192, max= 8192, per=23.09%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.945 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.945 lat (usec) : 250=78.01%, 500=21.85%, 750=0.12% 00:10:26.945 lat (msec) : 2=0.02% 00:10:26.945 cpu : usr=2.30%, sys=6.50%, ctx=4137, majf=0, minf=8 00:10:26.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.945 issued rwts: total=2048,2085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.945 job2: (groupid=0, jobs=1): err= 0: pid=68643: Mon Jul 15 19:00:54 2024 00:10:26.945 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:26.945 slat (usec): min=12, max=158, avg=17.03, stdev= 5.64 00:10:26.945 clat (usec): min=158, max=664, avg=241.02, stdev=41.56 00:10:26.945 lat (usec): min=172, max=684, avg=258.05, stdev=42.63 00:10:26.945 clat percentiles (usec): 00:10:26.945 | 1.00th=[ 176], 5.00th=[ 188], 
10.00th=[ 196], 20.00th=[ 206], 00:10:26.946 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 245], 00:10:26.946 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 293], 95.00th=[ 322], 00:10:26.946 | 99.00th=[ 355], 99.50th=[ 375], 99.90th=[ 437], 99.95th=[ 578], 00:10:26.946 | 99.99th=[ 668] 00:10:26.946 write: IOPS=2109, BW=8440KiB/s (8642kB/s)(8448KiB/1001msec); 0 zone resets 00:10:26.946 slat (usec): min=15, max=156, avg=25.15, stdev= 6.50 00:10:26.946 clat (usec): min=114, max=617, avg=194.10, stdev=38.28 00:10:26.946 lat (usec): min=132, max=641, avg=219.25, stdev=39.41 00:10:26.946 clat percentiles (usec): 00:10:26.946 | 1.00th=[ 131], 5.00th=[ 143], 10.00th=[ 151], 20.00th=[ 161], 00:10:26.946 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 192], 60.00th=[ 200], 00:10:26.946 | 70.00th=[ 210], 80.00th=[ 223], 90.00th=[ 243], 95.00th=[ 258], 00:10:26.946 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 441], 99.95th=[ 562], 00:10:26.946 | 99.99th=[ 619] 00:10:26.946 bw ( KiB/s): min= 8192, max= 8192, per=23.09%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.946 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.946 lat (usec) : 250=78.75%, 500=21.15%, 750=0.10% 00:10:26.946 cpu : usr=1.30%, sys=7.40%, ctx=4161, majf=0, minf=17 00:10:26.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.946 issued rwts: total=2048,2112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.946 job3: (groupid=0, jobs=1): err= 0: pid=68644: Mon Jul 15 19:00:54 2024 00:10:26.946 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:26.946 slat (nsec): min=13092, max=52524, avg=17176.35, stdev=4188.17 00:10:26.946 clat (usec): min=164, max=2149, avg=240.36, stdev=58.05 00:10:26.946 lat (usec): min=178, max=2166, avg=257.54, stdev=58.47 00:10:26.946 clat percentiles (usec): 00:10:26.946 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 204], 00:10:26.946 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 235], 60.00th=[ 245], 00:10:26.946 | 70.00th=[ 255], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 318], 00:10:26.946 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 404], 99.95th=[ 429], 00:10:26.946 | 99.99th=[ 2147] 00:10:26.946 write: IOPS=2118, BW=8476KiB/s (8679kB/s)(8484KiB/1001msec); 0 zone resets 00:10:26.946 slat (usec): min=18, max=132, avg=25.73, stdev= 6.68 00:10:26.946 clat (usec): min=105, max=329, avg=192.86, stdev=36.12 00:10:26.946 lat (usec): min=130, max=408, avg=218.59, stdev=37.15 00:10:26.946 clat percentiles (usec): 00:10:26.946 | 1.00th=[ 126], 5.00th=[ 141], 10.00th=[ 149], 20.00th=[ 161], 00:10:26.946 | 30.00th=[ 172], 40.00th=[ 182], 50.00th=[ 192], 60.00th=[ 202], 00:10:26.946 | 70.00th=[ 210], 80.00th=[ 223], 90.00th=[ 243], 95.00th=[ 258], 00:10:26.946 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 314], 99.95th=[ 326], 00:10:26.946 | 99.99th=[ 330] 00:10:26.946 bw ( KiB/s): min= 8192, max= 8192, per=23.09%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.946 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.946 lat (usec) : 250=79.59%, 500=20.39% 00:10:26.946 lat (msec) : 4=0.02% 00:10:26.946 cpu : usr=2.60%, sys=6.40%, ctx=4169, majf=0, minf=5 00:10:26.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.946 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.946 issued rwts: total=2048,2121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.946 00:10:26.946 Run status group 0 (all jobs): 00:10:26.946 READ: bw=33.2MiB/s (34.8MB/s), 8184KiB/s-9475KiB/s (8380kB/s-9702kB/s), io=33.3MiB (34.9MB), run=1001-1001msec 00:10:26.946 WRITE: bw=34.6MiB/s (36.3MB/s), 8332KiB/s-9.99MiB/s (8532kB/s-10.5MB/s), io=34.7MiB (36.4MB), run=1001-1001msec 00:10:26.946 00:10:26.946 Disk stats (read/write): 00:10:26.946 nvme0n1: ios=2098/2149, merge=0/0, ticks=486/362, in_queue=848, util=88.78% 00:10:26.946 nvme0n2: ios=1608/2048, merge=0/0, ticks=434/407, in_queue=841, util=89.78% 00:10:26.946 nvme0n3: ios=1570/2048, merge=0/0, ticks=407/416, in_queue=823, util=89.45% 00:10:26.946 nvme0n4: ios=1554/2048, merge=0/0, ticks=377/427, in_queue=804, util=89.69% 00:10:26.946 19:00:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:26.946 [global] 00:10:26.946 thread=1 00:10:26.946 invalidate=1 00:10:26.946 rw=randwrite 00:10:26.946 time_based=1 00:10:26.946 runtime=1 00:10:26.946 ioengine=libaio 00:10:26.946 direct=1 00:10:26.946 bs=4096 00:10:26.946 iodepth=1 00:10:26.946 norandommap=0 00:10:26.946 numjobs=1 00:10:26.946 00:10:26.946 verify_dump=1 00:10:26.946 verify_backlog=512 00:10:26.946 verify_state_save=0 00:10:26.946 do_verify=1 00:10:26.946 verify=crc32c-intel 00:10:26.946 [job0] 00:10:26.946 filename=/dev/nvme0n1 00:10:26.946 [job1] 00:10:26.946 filename=/dev/nvme0n2 00:10:26.946 [job2] 00:10:26.946 filename=/dev/nvme0n3 00:10:26.946 [job3] 00:10:26.946 filename=/dev/nvme0n4 00:10:26.946 Could not set queue depth (nvme0n1) 00:10:26.946 Could not set queue depth (nvme0n2) 00:10:26.946 Could not set queue depth (nvme0n3) 00:10:26.946 Could not set queue depth (nvme0n4) 00:10:27.204 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.204 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.204 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.204 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.204 fio-3.35 00:10:27.204 Starting 4 threads 00:10:28.581 00:10:28.581 job0: (groupid=0, jobs=1): err= 0: pid=68703: Mon Jul 15 19:00:55 2024 00:10:28.581 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:28.581 slat (nsec): min=11330, max=61654, avg=15120.82, stdev=5272.65 00:10:28.581 clat (usec): min=140, max=620, avg=233.07, stdev=60.67 00:10:28.581 lat (usec): min=156, max=639, avg=248.19, stdev=61.42 00:10:28.581 clat percentiles (usec): 00:10:28.581 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 194], 00:10:28.581 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 231], 00:10:28.581 | 70.00th=[ 241], 80.00th=[ 255], 90.00th=[ 285], 95.00th=[ 338], 00:10:28.581 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 586], 99.95th=[ 619], 00:10:28.581 | 99.99th=[ 619] 00:10:28.581 write: IOPS=2327, BW=9311KiB/s (9534kB/s)(9320KiB/1001msec); 0 zone resets 00:10:28.581 slat (usec): min=12, max=103, avg=22.85, stdev= 8.47 00:10:28.581 clat (usec): min=94, max=516, avg=184.59, stdev=51.02 
00:10:28.581 lat (usec): min=113, max=538, avg=207.44, stdev=53.34 00:10:28.581 clat percentiles (usec): 00:10:28.581 | 1.00th=[ 109], 5.00th=[ 122], 10.00th=[ 133], 20.00th=[ 147], 00:10:28.581 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 176], 60.00th=[ 186], 00:10:28.581 | 70.00th=[ 198], 80.00th=[ 215], 90.00th=[ 249], 95.00th=[ 285], 00:10:28.581 | 99.00th=[ 355], 99.50th=[ 429], 99.90th=[ 490], 99.95th=[ 498], 00:10:28.581 | 99.99th=[ 519] 00:10:28.581 bw ( KiB/s): min=10936, max=10936, per=36.73%, avg=10936.00, stdev= 0.00, samples=1 00:10:28.581 iops : min= 2734, max= 2734, avg=2734.00, stdev= 0.00, samples=1 00:10:28.581 lat (usec) : 100=0.16%, 250=83.94%, 500=15.28%, 750=0.62% 00:10:28.581 cpu : usr=2.00%, sys=6.40%, ctx=4381, majf=0, minf=13 00:10:28.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.581 issued rwts: total=2048,2330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.581 job1: (groupid=0, jobs=1): err= 0: pid=68704: Mon Jul 15 19:00:55 2024 00:10:28.581 read: IOPS=2016, BW=8068KiB/s (8262kB/s)(8076KiB/1001msec) 00:10:28.581 slat (nsec): min=12302, max=69353, avg=18354.38, stdev=6650.81 00:10:28.581 clat (usec): min=170, max=452, avg=256.08, stdev=42.86 00:10:28.581 lat (usec): min=188, max=464, avg=274.43, stdev=43.79 00:10:28.582 clat percentiles (usec): 00:10:28.582 | 1.00th=[ 186], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 221], 00:10:28.582 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 260], 00:10:28.582 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 318], 95.00th=[ 343], 00:10:28.582 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 420], 99.95th=[ 433], 00:10:28.582 | 99.99th=[ 453] 00:10:28.582 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:28.582 slat (usec): min=15, max=161, avg=27.86, stdev=11.77 00:10:28.582 clat (usec): min=2, max=2130, avg=185.62, stdev=58.50 00:10:28.582 lat (usec): min=129, max=2157, avg=213.48, stdev=60.88 00:10:28.582 clat percentiles (usec): 00:10:28.582 | 1.00th=[ 120], 5.00th=[ 133], 10.00th=[ 141], 20.00th=[ 153], 00:10:28.582 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 180], 60.00th=[ 190], 00:10:28.582 | 70.00th=[ 202], 80.00th=[ 215], 90.00th=[ 235], 95.00th=[ 255], 00:10:28.582 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 486], 99.95th=[ 676], 00:10:28.582 | 99.99th=[ 2147] 00:10:28.582 bw ( KiB/s): min= 8192, max= 8192, per=27.52%, avg=8192.00, stdev= 0.00, samples=1 00:10:28.582 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:28.582 lat (usec) : 4=0.02%, 100=0.02%, 250=72.83%, 500=27.07%, 750=0.02% 00:10:28.582 lat (msec) : 4=0.02% 00:10:28.582 cpu : usr=1.80%, sys=7.40%, ctx=4081, majf=0, minf=9 00:10:28.582 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.582 issued rwts: total=2019,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.582 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.582 job2: (groupid=0, jobs=1): err= 0: pid=68705: Mon Jul 15 19:00:55 2024 00:10:28.582 read: IOPS=1266, BW=5067KiB/s (5189kB/s)(5072KiB/1001msec) 00:10:28.582 slat (nsec): min=8750, max=71264, 
avg=20797.15, stdev=7300.11 00:10:28.582 clat (usec): min=224, max=632, avg=360.92, stdev=65.82 00:10:28.582 lat (usec): min=242, max=647, avg=381.71, stdev=66.77 00:10:28.582 clat percentiles (usec): 00:10:28.582 | 1.00th=[ 251], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 306], 00:10:28.582 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 371], 00:10:28.582 | 70.00th=[ 388], 80.00th=[ 412], 90.00th=[ 457], 95.00th=[ 486], 00:10:28.582 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 594], 99.95th=[ 635], 00:10:28.582 | 99.99th=[ 635] 00:10:28.582 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:28.582 slat (nsec): min=15115, max=97815, avg=29445.47, stdev=10565.57 00:10:28.582 clat (usec): min=111, max=8014, avg=301.80, stdev=267.58 00:10:28.582 lat (usec): min=135, max=8053, avg=331.24, stdev=268.25 00:10:28.582 clat percentiles (usec): 00:10:28.582 | 1.00th=[ 139], 5.00th=[ 159], 10.00th=[ 172], 20.00th=[ 196], 00:10:28.582 | 30.00th=[ 225], 40.00th=[ 251], 50.00th=[ 277], 60.00th=[ 302], 00:10:28.582 | 70.00th=[ 326], 80.00th=[ 367], 90.00th=[ 437], 95.00th=[ 469], 00:10:28.582 | 99.00th=[ 627], 99.50th=[ 1029], 99.90th=[ 3556], 99.95th=[ 8029], 00:10:28.582 | 99.99th=[ 8029] 00:10:28.582 bw ( KiB/s): min= 7768, max= 7768, per=26.09%, avg=7768.00, stdev= 0.00, samples=1 00:10:28.582 iops : min= 1942, max= 1942, avg=1942.00, stdev= 0.00, samples=1 00:10:28.582 lat (usec) : 250=21.83%, 500=74.71%, 750=3.17% 00:10:28.582 lat (msec) : 2=0.14%, 4=0.11%, 10=0.04% 00:10:28.582 cpu : usr=1.50%, sys=5.90%, ctx=2806, majf=0, minf=9 00:10:28.582 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.582 issued rwts: total=1268,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.582 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.582 job3: (groupid=0, jobs=1): err= 0: pid=68706: Mon Jul 15 19:00:55 2024 00:10:28.582 read: IOPS=1427, BW=5710KiB/s (5847kB/s)(5716KiB/1001msec) 00:10:28.582 slat (nsec): min=8475, max=68816, avg=16735.28, stdev=7106.34 00:10:28.582 clat (usec): min=177, max=661, avg=347.00, stdev=71.06 00:10:28.582 lat (usec): min=191, max=684, avg=363.73, stdev=70.90 00:10:28.582 clat percentiles (usec): 00:10:28.582 | 1.00th=[ 202], 5.00th=[ 241], 10.00th=[ 262], 20.00th=[ 289], 00:10:28.582 | 30.00th=[ 306], 40.00th=[ 326], 50.00th=[ 343], 60.00th=[ 359], 00:10:28.582 | 70.00th=[ 379], 80.00th=[ 404], 90.00th=[ 437], 95.00th=[ 474], 00:10:28.582 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 660], 99.95th=[ 660], 00:10:28.582 | 99.99th=[ 660] 00:10:28.582 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:28.582 slat (usec): min=13, max=319, avg=31.36, stdev=13.59 00:10:28.582 clat (usec): min=106, max=3298, avg=276.66, stdev=133.34 00:10:28.582 lat (usec): min=124, max=3322, avg=308.02, stdev=137.32 00:10:28.582 clat percentiles (usec): 00:10:28.582 | 1.00th=[ 133], 5.00th=[ 151], 10.00th=[ 163], 20.00th=[ 188], 00:10:28.582 | 30.00th=[ 208], 40.00th=[ 241], 50.00th=[ 265], 60.00th=[ 289], 00:10:28.582 | 70.00th=[ 314], 80.00th=[ 338], 90.00th=[ 404], 95.00th=[ 437], 00:10:28.582 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 2180], 99.95th=[ 3294], 00:10:28.582 | 99.99th=[ 3294] 00:10:28.582 bw ( KiB/s): min= 8192, max= 8192, per=27.52%, avg=8192.00, stdev= 0.00, samples=1 00:10:28.582 iops : min= 2048, max= 
2048, avg=2048.00, stdev= 0.00, samples=1 00:10:28.582 lat (usec) : 250=25.94%, 500=71.64%, 750=2.29%, 1000=0.03% 00:10:28.582 lat (msec) : 2=0.03%, 4=0.07% 00:10:28.582 cpu : usr=1.90%, sys=5.50%, ctx=2966, majf=0, minf=16 00:10:28.582 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.582 issued rwts: total=1429,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.582 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.582 00:10:28.582 Run status group 0 (all jobs): 00:10:28.582 READ: bw=26.4MiB/s (27.7MB/s), 5067KiB/s-8184KiB/s (5189kB/s-8380kB/s), io=26.4MiB (27.7MB), run=1001-1001msec 00:10:28.582 WRITE: bw=29.1MiB/s (30.5MB/s), 6138KiB/s-9311KiB/s (6285kB/s-9534kB/s), io=29.1MiB (30.5MB), run=1001-1001msec 00:10:28.582 00:10:28.582 Disk stats (read/write): 00:10:28.582 nvme0n1: ios=1964/2048, merge=0/0, ticks=498/361, in_queue=859, util=89.18% 00:10:28.582 nvme0n2: ios=1612/2048, merge=0/0, ticks=443/404, in_queue=847, util=89.78% 00:10:28.582 nvme0n3: ios=1052/1397, merge=0/0, ticks=414/387, in_queue=801, util=88.66% 00:10:28.582 nvme0n4: ios=1024/1498, merge=0/0, ticks=336/416, in_queue=752, util=89.41% 00:10:28.582 19:00:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:28.582 [global] 00:10:28.582 thread=1 00:10:28.582 invalidate=1 00:10:28.582 rw=write 00:10:28.582 time_based=1 00:10:28.582 runtime=1 00:10:28.582 ioengine=libaio 00:10:28.582 direct=1 00:10:28.582 bs=4096 00:10:28.582 iodepth=128 00:10:28.582 norandommap=0 00:10:28.582 numjobs=1 00:10:28.582 00:10:28.582 verify_dump=1 00:10:28.582 verify_backlog=512 00:10:28.582 verify_state_save=0 00:10:28.582 do_verify=1 00:10:28.582 verify=crc32c-intel 00:10:28.582 [job0] 00:10:28.582 filename=/dev/nvme0n1 00:10:28.582 [job1] 00:10:28.582 filename=/dev/nvme0n2 00:10:28.582 [job2] 00:10:28.582 filename=/dev/nvme0n3 00:10:28.582 [job3] 00:10:28.582 filename=/dev/nvme0n4 00:10:28.582 Could not set queue depth (nvme0n1) 00:10:28.582 Could not set queue depth (nvme0n2) 00:10:28.582 Could not set queue depth (nvme0n3) 00:10:28.582 Could not set queue depth (nvme0n4) 00:10:28.582 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.582 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.582 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.582 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.582 fio-3.35 00:10:28.582 Starting 4 threads 00:10:29.957 00:10:29.957 job0: (groupid=0, jobs=1): err= 0: pid=68765: Mon Jul 15 19:00:56 2024 00:10:29.957 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:29.957 slat (usec): min=5, max=6923, avg=147.19, stdev=729.79 00:10:29.957 clat (usec): min=13291, max=25294, avg=19768.74, stdev=1517.45 00:10:29.957 lat (usec): min=16541, max=25336, avg=19915.93, stdev=1329.98 00:10:29.957 clat percentiles (usec): 00:10:29.957 | 1.00th=[15139], 5.00th=[17695], 10.00th=[18220], 20.00th=[18744], 00:10:29.957 | 30.00th=[19006], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:10:29.957 | 70.00th=[20317], 80.00th=[20841], 
90.00th=[21365], 95.00th=[22414], 00:10:29.957 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:10:29.957 | 99.99th=[25297] 00:10:29.957 write: IOPS=3510, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1003msec); 0 zone resets 00:10:29.957 slat (usec): min=10, max=5811, avg=147.59, stdev=678.89 00:10:29.957 clat (usec): min=455, max=23808, avg=18696.87, stdev=2207.23 00:10:29.957 lat (usec): min=4610, max=23845, avg=18844.46, stdev=2111.52 00:10:29.957 clat percentiles (usec): 00:10:29.957 | 1.00th=[10159], 5.00th=[16057], 10.00th=[16909], 20.00th=[17695], 00:10:29.958 | 30.00th=[18220], 40.00th=[18482], 50.00th=[19006], 60.00th=[19268], 00:10:29.958 | 70.00th=[19530], 80.00th=[20055], 90.00th=[20579], 95.00th=[22152], 00:10:29.958 | 99.00th=[22938], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:10:29.958 | 99.99th=[23725] 00:10:29.958 bw ( KiB/s): min=13064, max=14080, per=25.99%, avg=13572.00, stdev=718.42, samples=2 00:10:29.958 iops : min= 3266, max= 3520, avg=3393.00, stdev=179.61, samples=2 00:10:29.958 lat (usec) : 500=0.02% 00:10:29.958 lat (msec) : 10=0.49%, 20=71.15%, 50=28.35% 00:10:29.958 cpu : usr=3.09%, sys=11.18%, ctx=208, majf=0, minf=3 00:10:29.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:29.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.958 issued rwts: total=3072,3521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.958 job1: (groupid=0, jobs=1): err= 0: pid=68766: Mon Jul 15 19:00:56 2024 00:10:29.958 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:29.958 slat (usec): min=7, max=5413, avg=147.59, stdev=729.60 00:10:29.958 clat (usec): min=13894, max=23197, avg=19778.95, stdev=1165.14 00:10:29.958 lat (usec): min=16811, max=23233, avg=19926.54, stdev=909.81 00:10:29.958 clat percentiles (usec): 00:10:29.958 | 1.00th=[15401], 5.00th=[18220], 10.00th=[18482], 20.00th=[19268], 00:10:29.958 | 30.00th=[19530], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:10:29.958 | 70.00th=[20317], 80.00th=[20317], 90.00th=[20841], 95.00th=[21365], 00:10:29.958 | 99.00th=[22938], 99.50th=[23200], 99.90th=[23200], 99.95th=[23200], 00:10:29.958 | 99.99th=[23200] 00:10:29.958 write: IOPS=3446, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1003msec); 0 zone resets 00:10:29.958 slat (usec): min=10, max=5473, avg=149.97, stdev=693.55 00:10:29.958 clat (usec): min=508, max=24199, avg=18976.50, stdev=2252.08 00:10:29.958 lat (usec): min=5850, max=24226, avg=19126.48, stdev=2149.91 00:10:29.958 clat percentiles (usec): 00:10:29.958 | 1.00th=[11600], 5.00th=[16057], 10.00th=[17171], 20.00th=[17695], 00:10:29.958 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:10:29.958 | 70.00th=[19530], 80.00th=[20317], 90.00th=[21890], 95.00th=[22676], 00:10:29.958 | 99.00th=[23462], 99.50th=[24249], 99.90th=[24249], 99.95th=[24249], 00:10:29.958 | 99.99th=[24249] 00:10:29.958 bw ( KiB/s): min=12833, max=13824, per=25.52%, avg=13328.50, stdev=700.74, samples=2 00:10:29.958 iops : min= 3208, max= 3456, avg=3332.00, stdev=175.36, samples=2 00:10:29.958 lat (usec) : 750=0.02% 00:10:29.958 lat (msec) : 10=0.49%, 20=67.39%, 50=32.10% 00:10:29.958 cpu : usr=2.99%, sys=10.98%, ctx=205, majf=0, minf=15 00:10:29.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:29.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.958 issued rwts: total=3072,3457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.958 job2: (groupid=0, jobs=1): err= 0: pid=68767: Mon Jul 15 19:00:56 2024 00:10:29.958 read: IOPS=2651, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1002msec) 00:10:29.958 slat (usec): min=5, max=6451, avg=175.00, stdev=873.77 00:10:29.958 clat (usec): min=382, max=26976, avg=22373.51, stdev=3091.21 00:10:29.958 lat (usec): min=5665, max=26994, avg=22548.52, stdev=2976.57 00:10:29.958 clat percentiles (usec): 00:10:29.958 | 1.00th=[ 6194], 5.00th=[18220], 10.00th=[19530], 20.00th=[20317], 00:10:29.958 | 30.00th=[21627], 40.00th=[22414], 50.00th=[23200], 60.00th=[23725], 00:10:29.958 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25297], 95.00th=[25822], 00:10:29.958 | 99.00th=[26870], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:10:29.958 | 99.99th=[26870] 00:10:29.958 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:10:29.958 slat (usec): min=14, max=5527, avg=165.30, stdev=765.19 00:10:29.958 clat (usec): min=14423, max=25737, avg=21702.41, stdev=1744.53 00:10:29.958 lat (usec): min=17254, max=25783, avg=21867.70, stdev=1573.93 00:10:29.958 clat percentiles (usec): 00:10:29.958 | 1.00th=[16909], 5.00th=[18482], 10.00th=[19268], 20.00th=[20317], 00:10:29.958 | 30.00th=[21103], 40.00th=[21627], 50.00th=[21890], 60.00th=[22414], 00:10:29.958 | 70.00th=[22938], 80.00th=[23200], 90.00th=[23725], 95.00th=[23987], 00:10:29.958 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25560], 99.95th=[25822], 00:10:29.958 | 99.99th=[25822] 00:10:29.958 bw ( KiB/s): min=12040, max=12312, per=23.31%, avg=12176.00, stdev=192.33, samples=2 00:10:29.958 iops : min= 3010, max= 3078, avg=3044.00, stdev=48.08, samples=2 00:10:29.958 lat (usec) : 500=0.02% 00:10:29.958 lat (msec) : 10=0.56%, 20=14.54%, 50=84.88% 00:10:29.958 cpu : usr=3.50%, sys=9.99%, ctx=180, majf=0, minf=7 00:10:29.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:29.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.958 issued rwts: total=2657,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.958 job3: (groupid=0, jobs=1): err= 0: pid=68768: Mon Jul 15 19:00:56 2024 00:10:29.958 read: IOPS=2673, BW=10.4MiB/s (11.0MB/s)(10.5MiB/1005msec) 00:10:29.958 slat (usec): min=7, max=9557, avg=178.56, stdev=816.47 00:10:29.958 clat (usec): min=852, max=35629, avg=22643.78, stdev=3202.33 00:10:29.958 lat (usec): min=7108, max=35666, avg=22822.34, stdev=3183.27 00:10:29.958 clat percentiles (usec): 00:10:29.958 | 1.00th=[ 7701], 5.00th=[18482], 10.00th=[19792], 20.00th=[21365], 00:10:29.958 | 30.00th=[21890], 40.00th=[22414], 50.00th=[22938], 60.00th=[23200], 00:10:29.958 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25560], 95.00th=[27395], 00:10:29.958 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:10:29.958 | 99.99th=[35390] 00:10:29.958 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:10:29.958 slat (usec): min=13, max=9974, avg=160.75, stdev=1018.27 00:10:29.958 clat (usec): min=10757, max=34062, avg=21414.33, stdev=2629.00 00:10:29.958 lat (usec): min=10803, max=34099, avg=21575.09, 
stdev=2792.02 00:10:29.958 clat percentiles (usec): 00:10:29.958 | 1.00th=[14615], 5.00th=[17695], 10.00th=[19006], 20.00th=[19530], 00:10:29.958 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21103], 60.00th=[21890], 00:10:29.958 | 70.00th=[22152], 80.00th=[22938], 90.00th=[23462], 95.00th=[26608], 00:10:29.958 | 99.00th=[30278], 99.50th=[30540], 99.90th=[32113], 99.95th=[32900], 00:10:29.958 | 99.99th=[33817] 00:10:29.958 bw ( KiB/s): min=12280, max=12312, per=23.54%, avg=12296.00, stdev=22.63, samples=2 00:10:29.958 iops : min= 3070, max= 3078, avg=3074.00, stdev= 5.66, samples=2 00:10:29.958 lat (usec) : 1000=0.02% 00:10:29.958 lat (msec) : 10=0.73%, 20=18.81%, 50=80.45% 00:10:29.958 cpu : usr=3.59%, sys=9.96%, ctx=177, majf=0, minf=10 00:10:29.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:29.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.958 issued rwts: total=2687,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.958 00:10:29.958 Run status group 0 (all jobs): 00:10:29.958 READ: bw=44.7MiB/s (46.8MB/s), 10.4MiB/s-12.0MiB/s (10.9MB/s-12.5MB/s), io=44.9MiB (47.1MB), run=1002-1005msec 00:10:29.958 WRITE: bw=51.0MiB/s (53.5MB/s), 11.9MiB/s-13.7MiB/s (12.5MB/s-14.4MB/s), io=51.3MiB (53.7MB), run=1002-1005msec 00:10:29.958 00:10:29.958 Disk stats (read/write): 00:10:29.958 nvme0n1: ios=2706/3072, merge=0/0, ticks=11839/12904, in_queue=24743, util=89.48% 00:10:29.958 nvme0n2: ios=2673/3072, merge=0/0, ticks=11965/13199, in_queue=25164, util=90.92% 00:10:29.958 nvme0n3: ios=2471/2560, merge=0/0, ticks=12972/12184, in_queue=25156, util=91.38% 00:10:29.958 nvme0n4: ios=2383/2560, merge=0/0, ticks=27042/23664, in_queue=50706, util=89.78% 00:10:29.958 19:00:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:29.958 [global] 00:10:29.958 thread=1 00:10:29.958 invalidate=1 00:10:29.958 rw=randwrite 00:10:29.958 time_based=1 00:10:29.958 runtime=1 00:10:29.958 ioengine=libaio 00:10:29.958 direct=1 00:10:29.958 bs=4096 00:10:29.958 iodepth=128 00:10:29.958 norandommap=0 00:10:29.958 numjobs=1 00:10:29.958 00:10:29.958 verify_dump=1 00:10:29.958 verify_backlog=512 00:10:29.958 verify_state_save=0 00:10:29.958 do_verify=1 00:10:29.958 verify=crc32c-intel 00:10:29.958 [job0] 00:10:29.958 filename=/dev/nvme0n1 00:10:29.958 [job1] 00:10:29.958 filename=/dev/nvme0n2 00:10:29.958 [job2] 00:10:29.958 filename=/dev/nvme0n3 00:10:29.958 [job3] 00:10:29.958 filename=/dev/nvme0n4 00:10:29.958 Could not set queue depth (nvme0n1) 00:10:29.958 Could not set queue depth (nvme0n2) 00:10:29.958 Could not set queue depth (nvme0n3) 00:10:29.958 Could not set queue depth (nvme0n4) 00:10:29.958 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.958 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.958 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.958 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.958 fio-3.35 00:10:29.958 Starting 4 threads 00:10:31.379 00:10:31.379 job0: (groupid=0, jobs=1): err= 0: pid=68822: Mon Jul 
15 19:00:58 2024 00:10:31.379 read: IOPS=1499, BW=5996KiB/s (6140kB/s)(6032KiB/1006msec) 00:10:31.379 slat (usec): min=8, max=18449, avg=301.23, stdev=1408.31 00:10:31.379 clat (usec): min=896, max=88469, avg=37014.69, stdev=14542.87 00:10:31.379 lat (usec): min=11772, max=88495, avg=37315.92, stdev=14673.73 00:10:31.379 clat percentiles (usec): 00:10:31.379 | 1.00th=[12387], 5.00th=[27132], 10.00th=[28443], 20.00th=[29230], 00:10:31.379 | 30.00th=[30016], 40.00th=[30802], 50.00th=[31327], 60.00th=[32900], 00:10:31.379 | 70.00th=[35390], 80.00th=[41681], 90.00th=[58459], 95.00th=[73925], 00:10:31.379 | 99.00th=[84411], 99.50th=[85459], 99.90th=[87557], 99.95th=[88605], 00:10:31.379 | 99.99th=[88605] 00:10:31.379 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:10:31.379 slat (usec): min=16, max=11946, avg=346.29, stdev=1308.04 00:10:31.379 clat (usec): min=20689, max=94127, avg=46060.10, stdev=20012.71 00:10:31.379 lat (usec): min=20713, max=94174, avg=46406.39, stdev=20137.61 00:10:31.379 clat percentiles (usec): 00:10:31.379 | 1.00th=[22676], 5.00th=[25035], 10.00th=[29492], 20.00th=[31851], 00:10:31.379 | 30.00th=[33162], 40.00th=[33817], 50.00th=[36963], 60.00th=[38011], 00:10:31.379 | 70.00th=[55313], 80.00th=[67634], 90.00th=[79168], 95.00th=[89654], 00:10:31.379 | 99.00th=[93848], 99.50th=[93848], 99.90th=[93848], 99.95th=[93848], 00:10:31.379 | 99.99th=[93848] 00:10:31.379 bw ( KiB/s): min= 5892, max= 6384, per=18.55%, avg=6138.00, stdev=347.90, samples=2 00:10:31.379 iops : min= 1473, max= 1596, avg=1534.50, stdev=86.97, samples=2 00:10:31.379 lat (usec) : 1000=0.03% 00:10:31.379 lat (msec) : 20=2.10%, 50=73.49%, 100=24.38% 00:10:31.379 cpu : usr=1.59%, sys=6.57%, ctx=212, majf=0, minf=11 00:10:31.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:10:31.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.379 issued rwts: total=1508,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.379 job1: (groupid=0, jobs=1): err= 0: pid=68823: Mon Jul 15 19:00:58 2024 00:10:31.379 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:10:31.380 slat (usec): min=8, max=11843, avg=242.15, stdev=1274.86 00:10:31.380 clat (usec): min=18488, max=38733, avg=30680.84, stdev=3325.80 00:10:31.380 lat (usec): min=24592, max=38745, avg=30922.98, stdev=3120.32 00:10:31.380 clat percentiles (usec): 00:10:31.380 | 1.00th=[22676], 5.00th=[25297], 10.00th=[26608], 20.00th=[28443], 00:10:31.380 | 30.00th=[28967], 40.00th=[29492], 50.00th=[30016], 60.00th=[30802], 00:10:31.380 | 70.00th=[32113], 80.00th=[33817], 90.00th=[35390], 95.00th=[36439], 00:10:31.380 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:10:31.380 | 99.99th=[38536] 00:10:31.380 write: IOPS=2136, BW=8546KiB/s (8751kB/s)(8580KiB/1004msec); 0 zone resets 00:10:31.380 slat (usec): min=20, max=9547, avg=226.02, stdev=1114.61 00:10:31.380 clat (usec): min=843, max=35153, avg=29556.39, stdev=4155.49 00:10:31.380 lat (usec): min=9315, max=35201, avg=29782.40, stdev=3995.75 00:10:31.380 clat percentiles (usec): 00:10:31.380 | 1.00th=[10159], 5.00th=[22152], 10.00th=[25560], 20.00th=[28181], 00:10:31.380 | 30.00th=[28967], 40.00th=[29754], 50.00th=[30540], 60.00th=[30802], 00:10:31.380 | 70.00th=[31327], 80.00th=[32113], 90.00th=[33817], 95.00th=[34341], 00:10:31.380 | 
99.00th=[34866], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:10:31.380 | 99.99th=[35390] 00:10:31.380 bw ( KiB/s): min= 8192, max= 8192, per=24.76%, avg=8192.00, stdev= 0.00, samples=2 00:10:31.380 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:31.380 lat (usec) : 1000=0.02% 00:10:31.380 lat (msec) : 10=0.41%, 20=1.31%, 50=98.26% 00:10:31.380 cpu : usr=2.89%, sys=6.68%, ctx=133, majf=0, minf=9 00:10:31.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:31.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.380 issued rwts: total=2048,2145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.380 job2: (groupid=0, jobs=1): err= 0: pid=68824: Mon Jul 15 19:00:58 2024 00:10:31.380 read: IOPS=2474, BW=9897KiB/s (10.1MB/s)(9956KiB/1006msec) 00:10:31.380 slat (usec): min=7, max=15269, avg=212.43, stdev=1110.07 00:10:31.380 clat (usec): min=409, max=48283, avg=26595.00, stdev=6968.38 00:10:31.380 lat (usec): min=11720, max=48306, avg=26807.43, stdev=7017.17 00:10:31.380 clat percentiles (usec): 00:10:31.380 | 1.00th=[12256], 5.00th=[17433], 10.00th=[19792], 20.00th=[20579], 00:10:31.380 | 30.00th=[21365], 40.00th=[22938], 50.00th=[26608], 60.00th=[28967], 00:10:31.380 | 70.00th=[30278], 80.00th=[31851], 90.00th=[35390], 95.00th=[39584], 00:10:31.380 | 99.00th=[44303], 99.50th=[46924], 99.90th=[48497], 99.95th=[48497], 00:10:31.380 | 99.99th=[48497] 00:10:31.380 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:10:31.380 slat (usec): min=11, max=14927, avg=176.36, stdev=986.50 00:10:31.380 clat (usec): min=9610, max=49957, avg=23513.79, stdev=7174.74 00:10:31.380 lat (usec): min=9656, max=49995, avg=23690.15, stdev=7258.64 00:10:31.380 clat percentiles (usec): 00:10:31.380 | 1.00th=[12911], 5.00th=[15401], 10.00th=[15664], 20.00th=[16450], 00:10:31.380 | 30.00th=[18744], 40.00th=[20317], 50.00th=[21365], 60.00th=[24773], 00:10:31.380 | 70.00th=[26608], 80.00th=[31327], 90.00th=[33424], 95.00th=[35390], 00:10:31.380 | 99.00th=[44303], 99.50th=[47973], 99.90th=[50070], 99.95th=[50070], 00:10:31.380 | 99.99th=[50070] 00:10:31.380 bw ( KiB/s): min= 8544, max=11936, per=30.95%, avg=10240.00, stdev=2398.51, samples=2 00:10:31.380 iops : min= 2136, max= 2984, avg=2560.00, stdev=599.63, samples=2 00:10:31.380 lat (usec) : 500=0.02% 00:10:31.380 lat (msec) : 10=0.04%, 20=25.69%, 50=74.25% 00:10:31.380 cpu : usr=1.79%, sys=9.45%, ctx=216, majf=0, minf=7 00:10:31.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:31.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.380 issued rwts: total=2489,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.380 job3: (groupid=0, jobs=1): err= 0: pid=68825: Mon Jul 15 19:00:58 2024 00:10:31.380 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:10:31.380 slat (usec): min=7, max=9831, avg=232.85, stdev=1213.54 00:10:31.380 clat (usec): min=20911, max=41471, avg=31857.83, stdev=3767.79 00:10:31.380 lat (usec): min=21581, max=41497, avg=32090.68, stdev=3577.02 00:10:31.380 clat percentiles (usec): 00:10:31.380 | 1.00th=[21890], 5.00th=[27395], 10.00th=[28443], 
20.00th=[28967], 00:10:31.380 | 30.00th=[29754], 40.00th=[30278], 50.00th=[30802], 60.00th=[32900], 00:10:31.380 | 70.00th=[33817], 80.00th=[34866], 90.00th=[36963], 95.00th=[38536], 00:10:31.380 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:31.380 | 99.99th=[41681] 00:10:31.380 write: IOPS=2072, BW=8291KiB/s (8490kB/s)(8324KiB/1004msec); 0 zone resets 00:10:31.380 slat (usec): min=18, max=13164, avg=241.72, stdev=1198.92 00:10:31.380 clat (usec): min=887, max=33817, avg=28960.82, stdev=3259.63 00:10:31.380 lat (usec): min=9404, max=37251, avg=29202.54, stdev=3053.59 00:10:31.380 clat percentiles (usec): 00:10:31.380 | 1.00th=[10159], 5.00th=[24773], 10.00th=[26084], 20.00th=[27395], 00:10:31.380 | 30.00th=[28181], 40.00th=[28967], 50.00th=[29492], 60.00th=[30278], 00:10:31.380 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[32113], 00:10:31.380 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:10:31.380 | 99.99th=[33817] 00:10:31.380 bw ( KiB/s): min= 8192, max= 8192, per=24.76%, avg=8192.00, stdev= 0.00, samples=2 00:10:31.380 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:31.380 lat (usec) : 1000=0.02% 00:10:31.380 lat (msec) : 10=0.39%, 20=0.46%, 50=99.13% 00:10:31.380 cpu : usr=3.29%, sys=6.48%, ctx=133, majf=0, minf=10 00:10:31.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:31.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.380 issued rwts: total=2048,2081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.380 00:10:31.380 Run status group 0 (all jobs): 00:10:31.380 READ: bw=31.4MiB/s (33.0MB/s), 5996KiB/s-9897KiB/s (6140kB/s-10.1MB/s), io=31.6MiB (33.1MB), run=1004-1006msec 00:10:31.380 WRITE: bw=32.3MiB/s (33.9MB/s), 6107KiB/s-9.94MiB/s (6254kB/s-10.4MB/s), io=32.5MiB (34.1MB), run=1004-1006msec 00:10:31.380 00:10:31.380 Disk stats (read/write): 00:10:31.380 nvme0n1: ios=1074/1423, merge=0/0, ticks=13826/21221, in_queue=35047, util=88.57% 00:10:31.380 nvme0n2: ios=1585/2048, merge=0/0, ticks=11862/13877, in_queue=25739, util=90.28% 00:10:31.380 nvme0n3: ios=2078/2415, merge=0/0, ticks=26849/23873, in_queue=50722, util=90.21% 00:10:31.380 nvme0n4: ios=1557/1984, merge=0/0, ticks=12008/13648, in_queue=25656, util=90.55% 00:10:31.380 19:00:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:31.380 19:00:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68842 00:10:31.380 19:00:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:31.380 19:00:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:31.380 [global] 00:10:31.380 thread=1 00:10:31.380 invalidate=1 00:10:31.380 rw=read 00:10:31.380 time_based=1 00:10:31.380 runtime=10 00:10:31.380 ioengine=libaio 00:10:31.380 direct=1 00:10:31.380 bs=4096 00:10:31.380 iodepth=1 00:10:31.380 norandommap=1 00:10:31.380 numjobs=1 00:10:31.380 00:10:31.380 [job0] 00:10:31.380 filename=/dev/nvme0n1 00:10:31.380 [job1] 00:10:31.380 filename=/dev/nvme0n2 00:10:31.380 [job2] 00:10:31.380 filename=/dev/nvme0n3 00:10:31.380 [job3] 00:10:31.380 filename=/dev/nvme0n4 00:10:31.380 Could not set queue depth (nvme0n1) 00:10:31.380 Could not set queue depth (nvme0n2) 00:10:31.380 Could not set queue depth (nvme0n3) 
00:10:31.380 Could not set queue depth (nvme0n4) 00:10:31.380 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.380 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.380 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.380 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.380 fio-3.35 00:10:31.380 Starting 4 threads 00:10:34.664 19:01:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:34.664 fio: pid=68891, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:34.664 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=37511168, buflen=4096 00:10:34.664 19:01:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:34.664 fio: pid=68890, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:34.664 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=42078208, buflen=4096 00:10:34.664 19:01:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.664 19:01:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:34.922 fio: pid=68888, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:34.922 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=39391232, buflen=4096 00:10:34.923 19:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.923 19:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:35.183 fio: pid=68889, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:35.183 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=44437504, buflen=4096 00:10:35.183 00:10:35.183 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68888: Mon Jul 15 19:01:02 2024 00:10:35.183 read: IOPS=2809, BW=11.0MiB/s (11.5MB/s)(37.6MiB/3423msec) 00:10:35.183 slat (usec): min=8, max=11736, avg=22.17, stdev=197.62 00:10:35.183 clat (usec): min=142, max=8065, avg=331.99, stdev=104.83 00:10:35.183 lat (usec): min=155, max=12029, avg=354.17, stdev=223.72 00:10:35.183 clat percentiles (usec): 00:10:35.183 | 1.00th=[ 229], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 281], 00:10:35.183 | 30.00th=[ 293], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 334], 00:10:35.183 | 70.00th=[ 351], 80.00th=[ 371], 90.00th=[ 408], 95.00th=[ 449], 00:10:35.183 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 701], 99.95th=[ 1106], 00:10:35.183 | 99.99th=[ 8094] 00:10:35.183 bw ( KiB/s): min=10360, max=11752, per=26.06%, avg=11226.67, stdev=526.46, samples=6 00:10:35.183 iops : min= 2590, max= 2938, avg=2806.67, stdev=131.62, samples=6 00:10:35.183 lat (usec) : 250=3.43%, 500=94.84%, 750=1.64%, 1000=0.02% 00:10:35.183 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01% 00:10:35.183 cpu : usr=1.05%, sys=4.70%, ctx=9633, majf=0, minf=1 00:10:35.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.183 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.183 issued rwts: total=9618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.183 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68889: Mon Jul 15 19:01:02 2024 00:10:35.183 read: IOPS=2929, BW=11.4MiB/s (12.0MB/s)(42.4MiB/3704msec) 00:10:35.183 slat (usec): min=8, max=13266, avg=22.40, stdev=222.58 00:10:35.183 clat (usec): min=3, max=1184, avg=317.31, stdev=73.49 00:10:35.183 lat (usec): min=148, max=13496, avg=339.71, stdev=234.40 00:10:35.183 clat percentiles (usec): 00:10:35.183 | 1.00th=[ 163], 5.00th=[ 192], 10.00th=[ 229], 20.00th=[ 269], 00:10:35.183 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 330], 00:10:35.183 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 404], 95.00th=[ 441], 00:10:35.183 | 99.00th=[ 523], 99.50th=[ 553], 99.90th=[ 701], 99.95th=[ 857], 00:10:35.183 | 99.99th=[ 1106] 00:10:35.183 bw ( KiB/s): min=10424, max=13215, per=26.85%, avg=11569.00, stdev=881.69, samples=7 00:10:35.183 iops : min= 2606, max= 3303, avg=2892.14, stdev=220.19, samples=7 00:10:35.183 lat (usec) : 4=0.03%, 250=13.56%, 500=84.72%, 750=1.62%, 1000=0.04% 00:10:35.183 lat (msec) : 2=0.03% 00:10:35.183 cpu : usr=0.95%, sys=4.54%, ctx=10869, majf=0, minf=1 00:10:35.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.183 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.183 issued rwts: total=10850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.183 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68890: Mon Jul 15 19:01:02 2024 00:10:35.183 read: IOPS=3222, BW=12.6MiB/s (13.2MB/s)(40.1MiB/3188msec) 00:10:35.183 slat (usec): min=8, max=7699, avg=17.31, stdev=91.28 00:10:35.183 clat (usec): min=144, max=4192, avg=291.01, stdev=82.35 00:10:35.183 lat (usec): min=156, max=7958, avg=308.32, stdev=122.90 00:10:35.183 clat percentiles (usec): 00:10:35.183 | 1.00th=[ 188], 5.00th=[ 206], 10.00th=[ 221], 20.00th=[ 241], 00:10:35.183 | 30.00th=[ 258], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 302], 00:10:35.183 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 383], 00:10:35.183 | 99.00th=[ 453], 99.50th=[ 486], 99.90th=[ 881], 99.95th=[ 1696], 00:10:35.183 | 99.99th=[ 3163] 00:10:35.183 bw ( KiB/s): min=11440, max=13192, per=29.54%, avg=12726.67, stdev=641.71, samples=6 00:10:35.183 iops : min= 2860, max= 3298, avg=3181.67, stdev=160.43, samples=6 00:10:35.183 lat (usec) : 250=26.07%, 500=73.52%, 750=0.30%, 1000=0.03% 00:10:35.183 lat (msec) : 2=0.04%, 4=0.03%, 10=0.01% 00:10:35.183 cpu : usr=0.94%, sys=4.83%, ctx=10276, majf=0, minf=1 00:10:35.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.183 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.183 issued rwts: total=10274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.183 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68891: Mon Jul 15 19:01:02 2024 
00:10:35.183 read: IOPS=3130, BW=12.2MiB/s (12.8MB/s)(35.8MiB/2926msec) 00:10:35.183 slat (nsec): min=10821, max=87220, avg=16839.95, stdev=5950.44 00:10:35.183 clat (usec): min=174, max=7517, avg=300.33, stdev=104.23 00:10:35.183 lat (usec): min=188, max=7532, avg=317.17, stdev=104.84 00:10:35.183 clat percentiles (usec): 00:10:35.183 | 1.00th=[ 198], 5.00th=[ 219], 10.00th=[ 233], 20.00th=[ 251], 00:10:35.183 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 297], 60.00th=[ 310], 00:10:35.183 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 388], 00:10:35.183 | 99.00th=[ 465], 99.50th=[ 529], 99.90th=[ 717], 99.95th=[ 1778], 00:10:35.183 | 99.99th=[ 7504] 00:10:35.183 bw ( KiB/s): min=12600, max=13048, per=29.79%, avg=12836.80, stdev=166.81, samples=5 00:10:35.183 iops : min= 3150, max= 3262, avg=3209.20, stdev=41.70, samples=5 00:10:35.183 lat (usec) : 250=18.74%, 500=80.62%, 750=0.53%, 1000=0.03% 00:10:35.183 lat (msec) : 2=0.02%, 4=0.03%, 10=0.01% 00:10:35.183 cpu : usr=1.85%, sys=4.10%, ctx=9163, majf=0, minf=1 00:10:35.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.183 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.183 issued rwts: total=9159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.183 00:10:35.183 Run status group 0 (all jobs): 00:10:35.183 READ: bw=42.1MiB/s (44.1MB/s), 11.0MiB/s-12.6MiB/s (11.5MB/s-13.2MB/s), io=156MiB (163MB), run=2926-3704msec 00:10:35.183 00:10:35.183 Disk stats (read/write): 00:10:35.183 nvme0n1: ios=9417/0, merge=0/0, ticks=3114/0, in_queue=3114, util=95.16% 00:10:35.183 nvme0n2: ios=10483/0, merge=0/0, ticks=3311/0, in_queue=3311, util=95.42% 00:10:35.183 nvme0n3: ios=9995/0, merge=0/0, ticks=3003/0, in_queue=3003, util=96.42% 00:10:35.183 nvme0n4: ios=9022/0, merge=0/0, ticks=2774/0, in_queue=2774, util=96.62% 00:10:35.183 19:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.183 19:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:35.441 19:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.441 19:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:35.698 19:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.698 19:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:35.955 19:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.955 19:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:36.213 19:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.213 19:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:36.471 19:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:36.471 
19:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68842 00:10:36.471 19:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:36.471 19:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.471 19:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.471 19:01:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:36.471 19:01:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:36.471 19:01:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.729 19:01:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:36.729 19:01:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.729 nvmf hotplug test: fio failed as expected 00:10:36.729 19:01:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:36.729 19:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:36.729 19:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:36.729 19:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:36.987 rmmod nvme_tcp 00:10:36.987 rmmod nvme_fabrics 00:10:36.987 rmmod nvme_keyring 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68457 ']' 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68457 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68457 ']' 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68457 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target 
-- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68457 00:10:36.987 killing process with pid 68457 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68457' 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68457 00:10:36.987 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68457 00:10:37.245 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:37.245 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:37.245 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:37.245 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.245 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:37.245 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.245 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.245 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.245 19:01:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:37.245 ************************************ 00:10:37.245 END TEST nvmf_fio_target 00:10:37.245 ************************************ 00:10:37.245 00:10:37.245 real 0m19.376s 00:10:37.245 user 1m13.824s 00:10:37.245 sys 0m9.181s 00:10:37.245 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.245 19:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.245 19:01:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:37.245 19:01:04 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:37.245 19:01:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:37.245 19:01:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.245 19:01:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:37.245 ************************************ 00:10:37.245 START TEST nvmf_bdevio 00:10:37.245 ************************************ 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:37.245 * Looking for test storage... 
00:10:37.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:37.245 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.246 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.504 19:01:04 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:37.504 Cannot find device "nvmf_tgt_br" 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.504 Cannot find device "nvmf_tgt_br2" 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:37.504 Cannot find device "nvmf_tgt_br" 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:37.504 Cannot find device "nvmf_tgt_br2" 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:37.504 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:37.504 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:37.504 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:37.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:10:37.763 00:10:37.763 --- 10.0.0.2 ping statistics --- 00:10:37.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.763 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:37.763 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:37.763 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:10:37.763 00:10:37.763 --- 10.0.0.3 ping statistics --- 00:10:37.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.763 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:37.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:37.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:37.763 00:10:37.763 --- 10.0.0.1 ping statistics --- 00:10:37.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.763 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69154 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69154 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69154 ']' 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.763 19:01:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.763 [2024-07-15 19:01:04.904677] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:10:37.763 [2024-07-15 19:01:04.904753] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.763 [2024-07-15 19:01:05.042490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.023 [2024-07-15 19:01:05.155568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.023 [2024-07-15 19:01:05.155640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:38.023 [2024-07-15 19:01:05.155666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.023 [2024-07-15 19:01:05.155674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.023 [2024-07-15 19:01:05.155680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.023 [2024-07-15 19:01:05.155834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:38.023 [2024-07-15 19:01:05.156596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:38.023 [2024-07-15 19:01:05.156731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:38.023 [2024-07-15 19:01:05.156731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.023 [2024-07-15 19:01:05.210160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:38.663 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.663 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:38.663 19:01:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:38.663 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:38.663 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.663 19:01:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.663 19:01:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.664 [2024-07-15 19:01:05.873483] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.664 Malloc0 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.664 [2024-07-15 19:01:05.947737] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.664 19:01:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.922 19:01:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:38.922 19:01:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:38.922 19:01:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:38.922 19:01:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:38.922 19:01:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:38.922 19:01:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:38.922 { 00:10:38.922 "params": { 00:10:38.922 "name": "Nvme$subsystem", 00:10:38.922 "trtype": "$TEST_TRANSPORT", 00:10:38.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:38.922 "adrfam": "ipv4", 00:10:38.922 "trsvcid": "$NVMF_PORT", 00:10:38.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:38.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:38.922 "hdgst": ${hdgst:-false}, 00:10:38.922 "ddgst": ${ddgst:-false} 00:10:38.922 }, 00:10:38.922 "method": "bdev_nvme_attach_controller" 00:10:38.922 } 00:10:38.922 EOF 00:10:38.922 )") 00:10:38.922 19:01:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:38.922 19:01:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:38.922 19:01:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:38.923 19:01:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:38.923 "params": { 00:10:38.923 "name": "Nvme1", 00:10:38.923 "trtype": "tcp", 00:10:38.923 "traddr": "10.0.0.2", 00:10:38.923 "adrfam": "ipv4", 00:10:38.923 "trsvcid": "4420", 00:10:38.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:38.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:38.923 "hdgst": false, 00:10:38.923 "ddgst": false 00:10:38.923 }, 00:10:38.923 "method": "bdev_nvme_attach_controller" 00:10:38.923 }' 00:10:38.923 [2024-07-15 19:01:06.010357] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:10:38.923 [2024-07-15 19:01:06.010562] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69190 ] 00:10:38.923 [2024-07-15 19:01:06.160299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:39.180 [2024-07-15 19:01:06.274339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.180 [2024-07-15 19:01:06.274485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.180 [2024-07-15 19:01:06.274494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.180 [2024-07-15 19:01:06.340403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:39.180 I/O targets: 00:10:39.180 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:39.180 00:10:39.180 00:10:39.180 CUnit - A unit testing framework for C - Version 2.1-3 00:10:39.180 http://cunit.sourceforge.net/ 00:10:39.180 00:10:39.180 00:10:39.180 Suite: bdevio tests on: Nvme1n1 00:10:39.180 Test: blockdev write read block ...passed 00:10:39.180 Test: blockdev write zeroes read block ...passed 00:10:39.439 Test: blockdev write zeroes read no split ...passed 00:10:39.439 Test: blockdev write zeroes read split ...passed 00:10:39.439 Test: blockdev write zeroes read split partial ...passed 00:10:39.439 Test: blockdev reset ...[2024-07-15 19:01:06.496019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:39.439 [2024-07-15 19:01:06.496332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1671730 (9): Bad file descriptor 00:10:39.439 [2024-07-15 19:01:06.508196] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:39.439 passed 00:10:39.439 Test: blockdev write read 8 blocks ...passed 00:10:39.439 Test: blockdev write read size > 128k ...passed 00:10:39.439 Test: blockdev write read invalid size ...passed 00:10:39.439 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:39.439 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:39.439 Test: blockdev write read max offset ...passed 00:10:39.439 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:39.439 Test: blockdev writev readv 8 blocks ...passed 00:10:39.439 Test: blockdev writev readv 30 x 1block ...passed 00:10:39.439 Test: blockdev writev readv block ...passed 00:10:39.439 Test: blockdev writev readv size > 128k ...passed 00:10:39.439 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:39.439 Test: blockdev comparev and writev ...[2024-07-15 19:01:06.517718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.439 [2024-07-15 19:01:06.517781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:39.439 [2024-07-15 19:01:06.517801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.439 [2024-07-15 19:01:06.517813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:39.439 [2024-07-15 19:01:06.518167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.439 [2024-07-15 19:01:06.518193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:39.439 [2024-07-15 19:01:06.518210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.439 [2024-07-15 19:01:06.518220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:39.439 [2024-07-15 19:01:06.518705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.440 [2024-07-15 19:01:06.518731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:39.440 [2024-07-15 19:01:06.518747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.440 [2024-07-15 19:01:06.518758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:39.440 [2024-07-15 19:01:06.519039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.440 [2024-07-15 19:01:06.519064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:39.440 [2024-07-15 19:01:06.519080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.440 [2024-07-15 19:01:06.519090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:39.440 passed 00:10:39.440 Test: blockdev nvme passthru rw ...passed 00:10:39.440 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:01:06.519986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.440 [2024-07-15 19:01:06.520010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:39.440 [2024-07-15 19:01:06.520147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.440 [2024-07-15 19:01:06.520171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:39.440 [2024-07-15 19:01:06.520280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.440 [2024-07-15 19:01:06.520302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:39.440 [2024-07-15 19:01:06.520425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.440 [2024-07-15 19:01:06.520447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:39.440 passed 00:10:39.440 Test: blockdev nvme admin passthru ...passed 00:10:39.440 Test: blockdev copy ...passed 00:10:39.440 00:10:39.440 Run Summary: Type Total Ran Passed Failed Inactive 00:10:39.440 suites 1 1 n/a 0 0 00:10:39.440 tests 23 23 23 0 0 00:10:39.440 asserts 152 152 152 0 n/a 00:10:39.440 00:10:39.440 Elapsed time = 0.155 seconds 00:10:39.697 19:01:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:39.698 rmmod nvme_tcp 00:10:39.698 rmmod nvme_fabrics 00:10:39.698 rmmod nvme_keyring 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69154 ']' 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69154 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
69154 ']' 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 69154 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69154 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:39.698 killing process with pid 69154 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69154' 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69154 00:10:39.698 19:01:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69154 00:10:39.955 19:01:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:39.955 19:01:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:39.956 19:01:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:39.956 19:01:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.956 19:01:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:39.956 19:01:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.956 19:01:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.956 19:01:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.956 19:01:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:39.956 00:10:39.956 real 0m2.740s 00:10:39.956 user 0m9.165s 00:10:39.956 sys 0m0.768s 00:10:39.956 19:01:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.956 19:01:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.956 ************************************ 00:10:39.956 END TEST nvmf_bdevio 00:10:39.956 ************************************ 00:10:39.956 19:01:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:39.956 19:01:07 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:39.956 19:01:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:39.956 19:01:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.956 19:01:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:39.956 ************************************ 00:10:39.956 START TEST nvmf_auth_target 00:10:39.956 ************************************ 00:10:39.956 19:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:40.215 * Looking for test storage... 
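The auth stage that follows exercises DH-HMAC-CHAP. Before the target is started, gen_dhchap_key (nvmf/common.sh, traced below) draws a random secret with xxd, wraps it via format_dhchap_key/format_key into a DHHC-1 string, and stores it under /tmp/spdk.key-<digest>.XXX with mode 0600. The python one-liner doing the wrapping is not expanded in the trace; the sketch below reconstructs it under the assumption that it emits the standard NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hmac id>:<base64 of secret plus CRC-32>:, which is consistent with the prefix, digest indexes and key lengths visible in the log but is not lifted from the script itself:

  key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex characters, mirroring "gen_dhchap_key null 48" below
  digest=0                                # 0=null, 1=sha256, 2=sha384, 3=sha512, matching the traced calls
  python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); crc=zlib.crc32(s).to_bytes(4,"little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(s+crc).decode()))' "$key" "$digest"

The resulting files populate the keys[] and ckeys[] arrays seen in the trace, which the rest of the auth test wires into the host and subsystem configuration.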
00:10:40.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:40.215 Cannot find device "nvmf_tgt_br" 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.215 Cannot find device "nvmf_tgt_br2" 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:40.215 Cannot find device "nvmf_tgt_br" 00:10:40.215 
19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:40.215 Cannot find device "nvmf_tgt_br2" 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.215 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.473 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.474 19:01:07 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:40.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:10:40.474 00:10:40.474 --- 10.0.0.2 ping statistics --- 00:10:40.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.474 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:40.474 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:40.474 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:10:40.474 00:10:40.474 --- 10.0.0.3 ping statistics --- 00:10:40.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.474 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:40.474 00:10:40.474 --- 10.0.0.1 ping statistics --- 00:10:40.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.474 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69364 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69364 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69364 ']' 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.474 19:01:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.474 19:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69396 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cbfda105b67dbc55d78b7934444aac99c8072586b811ef9e 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5ip 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cbfda105b67dbc55d78b7934444aac99c8072586b811ef9e 0 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cbfda105b67dbc55d78b7934444aac99c8072586b811ef9e 0 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cbfda105b67dbc55d78b7934444aac99c8072586b811ef9e 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5ip 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5ip 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.5ip 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b7043886a63a61ae05260b0ead9848f8750674905d52bc241ad2761651110814 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qDB 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b7043886a63a61ae05260b0ead9848f8750674905d52bc241ad2761651110814 3 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b7043886a63a61ae05260b0ead9848f8750674905d52bc241ad2761651110814 3 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b7043886a63a61ae05260b0ead9848f8750674905d52bc241ad2761651110814 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qDB 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qDB 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.qDB 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5ceeb35a5d390f232cb9abeba9f25391 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Lx1 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5ceeb35a5d390f232cb9abeba9f25391 1 00:10:41.845 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5ceeb35a5d390f232cb9abeba9f25391 1 
00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5ceeb35a5d390f232cb9abeba9f25391 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Lx1 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Lx1 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Lx1 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=93472d45b6c524f8be2a74e1d9a02a326ed0af8de6fcd9af 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.smS 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 93472d45b6c524f8be2a74e1d9a02a326ed0af8de6fcd9af 2 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 93472d45b6c524f8be2a74e1d9a02a326ed0af8de6fcd9af 2 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=93472d45b6c524f8be2a74e1d9a02a326ed0af8de6fcd9af 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:41.846 19:01:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.smS 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.smS 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.smS 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:41.846 
19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9ddaf8895472d422fdbaea1b064dfb01fe2c21dcd6f5c980 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.t0w 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9ddaf8895472d422fdbaea1b064dfb01fe2c21dcd6f5c980 2 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9ddaf8895472d422fdbaea1b064dfb01fe2c21dcd6f5c980 2 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9ddaf8895472d422fdbaea1b064dfb01fe2c21dcd6f5c980 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.t0w 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.t0w 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.t0w 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d4d014016fe7c958a1ed95762e7e8b69 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.O4q 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d4d014016fe7c958a1ed95762e7e8b69 1 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d4d014016fe7c958a1ed95762e7e8b69 1 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d4d014016fe7c958a1ed95762e7e8b69 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:41.846 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.O4q 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.O4q 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.O4q 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c99faac3b68213effb27ebb94c047413f5314adb5ed1ba5b7470eb6057113a77 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.NsQ 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c99faac3b68213effb27ebb94c047413f5314adb5ed1ba5b7470eb6057113a77 3 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c99faac3b68213effb27ebb94c047413f5314adb5ed1ba5b7470eb6057113a77 3 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c99faac3b68213effb27ebb94c047413f5314adb5ed1ba5b7470eb6057113a77 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.NsQ 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.NsQ 00:10:42.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.NsQ 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69364 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69364 ']' 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.103 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
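
Once both daemons are up (the nvmf target behind /var/tmp/spdk.sock, pid 69364, and the host-side spdk_tgt behind /var/tmp/host.sock, pid 69396), the rest of the trace loops over the generated key files, registers them on both RPC sockets, and runs one DH-HMAC-CHAP connect cycle per digest/dhgroup/key combination. The snippet below is a hand-written condensation of that sequence for key0/ckey0 only, shelling out to the same rpc.py and reusing the NQNs, addresses and file names from this run; it is an illustration of the flow, not part of target/auth.sh.

    #!/usr/bin/env python3
    # Condensed illustration of the RPC sequence the remainder of this log runs
    # for key0/ckey0 with the sha256 digest and the null dhgroup.
    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    HOST_SOCK = "/var/tmp/host.sock"   # host-side spdk_tgt started with -r /var/tmp/host.sock
    SUBNQN = "nqn.2024-03.io.spdk:cnode0"
    HOSTNQN = "nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff"

    def rpc(*args, sock=None):
        # rpc_cmd in the trace talks to the default /var/tmp/spdk.sock; hostrpc adds -s host.sock
        cmd = [RPC] + (["-s", sock] if sock else []) + list(args)
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # register the generated key files with both the target and the host application
    for name, path in [("key0", "/tmp/spdk.key-null.5ip"), ("ckey0", "/tmp/spdk.key-sha512.qDB")]:
        rpc("keyring_file_add_key", name, path)                    # nvmf target
        rpc("keyring_file_add_key", name, path, sock=HOST_SOCK)    # host-side spdk_tgt

    # restrict the initiator to one digest/dhgroup, authorize the host with
    # bidirectional keys, then attach a controller that must authenticate with them
    rpc("bdev_nvme_set_options", "--dhchap-digests", "sha256", "--dhchap-dhgroups", "null",
        sock=HOST_SOCK)
    rpc("nvmf_subsystem_add_host", SUBNQN, HOSTNQN,
        "--dhchap-key", "key0", "--dhchap-ctrlr-key", "ckey0")
    rpc("bdev_nvme_attach_controller", "-b", "nvme0", "-t", "tcp", "-f", "ipv4",
        "-a", "10.0.0.2", "-s", "4420", "-q", HOSTNQN, "-n", SUBNQN,
        "--dhchap-key", "key0", "--dhchap-ctrlr-key", "ckey0", sock=HOST_SOCK)

    # the trace then inspects the qpairs and expects auth.state == "completed"
    print(rpc("nvmf_subsystem_get_qpairs", SUBNQN))

In the trace the same pattern repeats for keys 1-3 and, further down, for the ffdhe2048 dhgroup, with nvme connect / nvme disconnect rounds in between that exercise the kernel initiator path against the same subsystem.
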
00:10:42.361 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.361 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:42.361 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69396 /var/tmp/host.sock 00:10:42.361 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69396 ']' 00:10:42.361 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:42.361 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.361 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:42.361 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.361 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5ip 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.5ip 00:10:42.618 19:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.5ip 00:10:42.876 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.qDB ]] 00:10:42.876 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qDB 00:10:42.876 19:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.876 19:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.876 19:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.876 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qDB 00:10:42.876 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qDB 00:10:43.132 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:43.132 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Lx1 00:10:43.132 19:01:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.132 19:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.132 19:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.132 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Lx1 00:10:43.132 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Lx1 00:10:43.389 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.smS ]] 00:10:43.389 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.smS 00:10:43.389 19:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.389 19:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.389 19:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.389 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.smS 00:10:43.389 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.smS 00:10:43.696 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:43.696 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.t0w 00:10:43.696 19:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.696 19:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.696 19:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.696 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.t0w 00:10:43.696 19:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.t0w 00:10:43.956 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.O4q ]] 00:10:43.956 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.O4q 00:10:43.956 19:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.956 19:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.956 19:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.956 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.O4q 00:10:43.956 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.O4q 00:10:44.214 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:44.214 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.NsQ 00:10:44.214 19:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.214 19:01:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.214 19:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.214 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.NsQ 00:10:44.214 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.NsQ 00:10:44.473 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:44.473 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:44.473 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:44.473 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:44.473 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:44.473 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:44.732 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:44.732 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.732 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.732 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:44.732 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:44.732 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.732 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.732 19:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.732 19:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.732 19:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.733 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.733 19:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.991 00:10:44.991 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.991 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.991 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:45.249 { 00:10:45.249 "cntlid": 1, 00:10:45.249 "qid": 0, 00:10:45.249 "state": "enabled", 00:10:45.249 "thread": "nvmf_tgt_poll_group_000", 00:10:45.249 "listen_address": { 00:10:45.249 "trtype": "TCP", 00:10:45.249 "adrfam": "IPv4", 00:10:45.249 "traddr": "10.0.0.2", 00:10:45.249 "trsvcid": "4420" 00:10:45.249 }, 00:10:45.249 "peer_address": { 00:10:45.249 "trtype": "TCP", 00:10:45.249 "adrfam": "IPv4", 00:10:45.249 "traddr": "10.0.0.1", 00:10:45.249 "trsvcid": "43976" 00:10:45.249 }, 00:10:45.249 "auth": { 00:10:45.249 "state": "completed", 00:10:45.249 "digest": "sha256", 00:10:45.249 "dhgroup": "null" 00:10:45.249 } 00:10:45.249 } 00:10:45.249 ]' 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.249 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.508 19:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.777 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.777 19:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:50.777 19:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.777 19:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.777 19:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.777 19:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.777 19:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.777 19:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:50.777 { 00:10:50.777 "cntlid": 3, 00:10:50.777 "qid": 0, 00:10:50.777 "state": "enabled", 00:10:50.777 "thread": "nvmf_tgt_poll_group_000", 00:10:50.777 "listen_address": { 00:10:50.777 "trtype": "TCP", 00:10:50.777 "adrfam": "IPv4", 00:10:50.777 "traddr": "10.0.0.2", 00:10:50.777 "trsvcid": "4420" 00:10:50.777 }, 00:10:50.777 "peer_address": { 00:10:50.777 "trtype": "TCP", 00:10:50.777 "adrfam": "IPv4", 00:10:50.777 "traddr": "10.0.0.1", 00:10:50.777 "trsvcid": "44006" 00:10:50.777 }, 00:10:50.777 "auth": { 00:10:50.777 "state": "completed", 00:10:50.777 "digest": "sha256", 00:10:50.777 "dhgroup": "null" 00:10:50.777 } 
00:10:50.777 } 00:10:50.777 ]' 00:10:50.777 19:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:51.036 19:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.036 19:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:51.036 19:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:51.036 19:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:51.036 19:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.036 19:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.036 19:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.295 19:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.290 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.548 00:10:52.807 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:52.807 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.807 19:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:53.066 { 00:10:53.066 "cntlid": 5, 00:10:53.066 "qid": 0, 00:10:53.066 "state": "enabled", 00:10:53.066 "thread": "nvmf_tgt_poll_group_000", 00:10:53.066 "listen_address": { 00:10:53.066 "trtype": "TCP", 00:10:53.066 "adrfam": "IPv4", 00:10:53.066 "traddr": "10.0.0.2", 00:10:53.066 "trsvcid": "4420" 00:10:53.066 }, 00:10:53.066 "peer_address": { 00:10:53.066 "trtype": "TCP", 00:10:53.066 "adrfam": "IPv4", 00:10:53.066 "traddr": "10.0.0.1", 00:10:53.066 "trsvcid": "44050" 00:10:53.066 }, 00:10:53.066 "auth": { 00:10:53.066 "state": "completed", 00:10:53.066 "digest": "sha256", 00:10:53.066 "dhgroup": "null" 00:10:53.066 } 00:10:53.066 } 00:10:53.066 ]' 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.066 19:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.324 19:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 
1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:10:53.890 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.890 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:53.890 19:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.890 19:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.890 19:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.890 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:53.890 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:53.890 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:54.149 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:54.149 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:54.149 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:54.149 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:54.149 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:54.149 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.149 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:10:54.149 19:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.149 19:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.149 19:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.149 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:54.149 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:54.713 00:10:54.713 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.713 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.713 19:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.970 { 00:10:54.970 "cntlid": 7, 00:10:54.970 "qid": 0, 00:10:54.970 "state": "enabled", 00:10:54.970 "thread": "nvmf_tgt_poll_group_000", 00:10:54.970 "listen_address": { 00:10:54.970 "trtype": "TCP", 00:10:54.970 "adrfam": "IPv4", 00:10:54.970 "traddr": "10.0.0.2", 00:10:54.970 "trsvcid": "4420" 00:10:54.970 }, 00:10:54.970 "peer_address": { 00:10:54.970 "trtype": "TCP", 00:10:54.970 "adrfam": "IPv4", 00:10:54.970 "traddr": "10.0.0.1", 00:10:54.970 "trsvcid": "45940" 00:10:54.970 }, 00:10:54.970 "auth": { 00:10:54.970 "state": "completed", 00:10:54.970 "digest": "sha256", 00:10:54.970 "dhgroup": "null" 00:10:54.970 } 00:10:54.970 } 00:10:54.970 ]' 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.970 19:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.228 19:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:10:56.161 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.161 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:56.161 19:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.161 19:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.161 19:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.161 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:56.161 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:56.161 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:56.161 19:01:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:56.418 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:56.418 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:56.418 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:56.418 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:56.418 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:56.418 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.418 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.418 19:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.418 19:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.418 19:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.419 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.419 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.676 00:10:56.676 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.676 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.676 19:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.934 19:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.934 19:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.934 19:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.934 19:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.934 19:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.934 19:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:56.934 { 00:10:56.934 "cntlid": 9, 00:10:56.934 "qid": 0, 00:10:56.934 "state": "enabled", 00:10:56.934 "thread": "nvmf_tgt_poll_group_000", 00:10:56.934 "listen_address": { 00:10:56.934 "trtype": "TCP", 00:10:56.934 "adrfam": "IPv4", 00:10:56.934 "traddr": "10.0.0.2", 00:10:56.934 "trsvcid": "4420" 00:10:56.934 }, 00:10:56.934 "peer_address": { 00:10:56.934 "trtype": "TCP", 00:10:56.934 "adrfam": "IPv4", 00:10:56.934 "traddr": "10.0.0.1", 00:10:56.934 "trsvcid": "45960" 00:10:56.934 }, 00:10:56.934 "auth": { 00:10:56.934 "state": "completed", 00:10:56.934 
"digest": "sha256", 00:10:56.934 "dhgroup": "ffdhe2048" 00:10:56.934 } 00:10:56.934 } 00:10:56.934 ]' 00:10:56.934 19:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.934 19:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.934 19:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.934 19:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:56.934 19:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.192 19:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.192 19:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.192 19:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.192 19:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:10:58.125 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.125 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:10:58.125 19:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.125 19:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.125 19:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.125 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.125 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:58.125 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:58.477 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:58.477 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:58.477 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:58.477 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:58.477 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:58.477 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.477 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.477 19:01:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.477 19:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.477 19:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.477 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.477 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.781 00:10:58.782 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.782 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.782 19:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.782 19:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.782 19:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.782 19:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.782 19:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.782 19:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.782 19:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.782 { 00:10:58.782 "cntlid": 11, 00:10:58.782 "qid": 0, 00:10:58.782 "state": "enabled", 00:10:58.782 "thread": "nvmf_tgt_poll_group_000", 00:10:58.782 "listen_address": { 00:10:58.782 "trtype": "TCP", 00:10:58.782 "adrfam": "IPv4", 00:10:58.782 "traddr": "10.0.0.2", 00:10:58.782 "trsvcid": "4420" 00:10:58.782 }, 00:10:58.782 "peer_address": { 00:10:58.782 "trtype": "TCP", 00:10:58.782 "adrfam": "IPv4", 00:10:58.782 "traddr": "10.0.0.1", 00:10:58.782 "trsvcid": "45996" 00:10:58.782 }, 00:10:58.782 "auth": { 00:10:58.782 "state": "completed", 00:10:58.782 "digest": "sha256", 00:10:58.782 "dhgroup": "ffdhe2048" 00:10:58.782 } 00:10:58.782 } 00:10:58.782 ]' 00:10:58.782 19:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:59.038 19:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.038 19:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:59.038 19:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:59.038 19:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.038 19:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.038 19:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.038 19:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.295 19:01:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.227 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.792 00:11:00.792 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.792 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:11:00.792 19:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.051 { 00:11:01.051 "cntlid": 13, 00:11:01.051 "qid": 0, 00:11:01.051 "state": "enabled", 00:11:01.051 "thread": "nvmf_tgt_poll_group_000", 00:11:01.051 "listen_address": { 00:11:01.051 "trtype": "TCP", 00:11:01.051 "adrfam": "IPv4", 00:11:01.051 "traddr": "10.0.0.2", 00:11:01.051 "trsvcid": "4420" 00:11:01.051 }, 00:11:01.051 "peer_address": { 00:11:01.051 "trtype": "TCP", 00:11:01.051 "adrfam": "IPv4", 00:11:01.051 "traddr": "10.0.0.1", 00:11:01.051 "trsvcid": "46042" 00:11:01.051 }, 00:11:01.051 "auth": { 00:11:01.051 "state": "completed", 00:11:01.051 "digest": "sha256", 00:11:01.051 "dhgroup": "ffdhe2048" 00:11:01.051 } 00:11:01.051 } 00:11:01.051 ]' 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.051 19:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.311 19:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:02.249 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:02.507 00:11:02.780 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:02.780 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:02.780 19:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.040 { 00:11:03.040 "cntlid": 15, 00:11:03.040 "qid": 0, 00:11:03.040 "state": "enabled", 00:11:03.040 "thread": "nvmf_tgt_poll_group_000", 00:11:03.040 "listen_address": { 00:11:03.040 "trtype": "TCP", 00:11:03.040 "adrfam": "IPv4", 00:11:03.040 "traddr": "10.0.0.2", 00:11:03.040 "trsvcid": "4420" 00:11:03.040 }, 00:11:03.040 "peer_address": { 00:11:03.040 "trtype": "TCP", 00:11:03.040 "adrfam": "IPv4", 
00:11:03.040 "traddr": "10.0.0.1", 00:11:03.040 "trsvcid": "46066" 00:11:03.040 }, 00:11:03.040 "auth": { 00:11:03.040 "state": "completed", 00:11:03.040 "digest": "sha256", 00:11:03.040 "dhgroup": "ffdhe2048" 00:11:03.040 } 00:11:03.040 } 00:11:03.040 ]' 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.040 19:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.298 19:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.341 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.598 00:11:04.598 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:04.598 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.598 19:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.855 19:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.855 19:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.855 19:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.855 19:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.113 19:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.113 19:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.113 { 00:11:05.113 "cntlid": 17, 00:11:05.113 "qid": 0, 00:11:05.113 "state": "enabled", 00:11:05.113 "thread": "nvmf_tgt_poll_group_000", 00:11:05.113 "listen_address": { 00:11:05.113 "trtype": "TCP", 00:11:05.113 "adrfam": "IPv4", 00:11:05.113 "traddr": "10.0.0.2", 00:11:05.113 "trsvcid": "4420" 00:11:05.113 }, 00:11:05.113 "peer_address": { 00:11:05.113 "trtype": "TCP", 00:11:05.113 "adrfam": "IPv4", 00:11:05.113 "traddr": "10.0.0.1", 00:11:05.113 "trsvcid": "60302" 00:11:05.113 }, 00:11:05.113 "auth": { 00:11:05.113 "state": "completed", 00:11:05.113 "digest": "sha256", 00:11:05.113 "dhgroup": "ffdhe3072" 00:11:05.113 } 00:11:05.113 } 00:11:05.113 ]' 00:11:05.113 19:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.113 19:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:05.113 19:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.113 19:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:05.113 19:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.113 19:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.113 19:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.113 19:01:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.371 19:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:11:06.305 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.305 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:06.305 19:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.305 19:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.305 19:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.305 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.305 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:06.305 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:06.563 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:06.563 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.563 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:06.563 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:06.563 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:06.563 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.563 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.563 19:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.563 19:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.563 19:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.563 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.563 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:11:06.821 00:11:06.821 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.821 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.821 19:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.080 19:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.080 19:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.080 19:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.080 19:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.080 19:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.080 19:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.080 { 00:11:07.080 "cntlid": 19, 00:11:07.080 "qid": 0, 00:11:07.080 "state": "enabled", 00:11:07.080 "thread": "nvmf_tgt_poll_group_000", 00:11:07.080 "listen_address": { 00:11:07.080 "trtype": "TCP", 00:11:07.080 "adrfam": "IPv4", 00:11:07.080 "traddr": "10.0.0.2", 00:11:07.080 "trsvcid": "4420" 00:11:07.080 }, 00:11:07.080 "peer_address": { 00:11:07.080 "trtype": "TCP", 00:11:07.080 "adrfam": "IPv4", 00:11:07.080 "traddr": "10.0.0.1", 00:11:07.080 "trsvcid": "60338" 00:11:07.080 }, 00:11:07.080 "auth": { 00:11:07.080 "state": "completed", 00:11:07.080 "digest": "sha256", 00:11:07.080 "dhgroup": "ffdhe3072" 00:11:07.080 } 00:11:07.080 } 00:11:07.080 ]' 00:11:07.080 19:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.080 19:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.080 19:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.337 19:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:07.337 19:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.337 19:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.337 19:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.337 19:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.594 19:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:11:08.159 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.159 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:08.159 19:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.159 
19:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.159 19:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.159 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.159 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:08.159 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:08.723 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:11:08.724 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.724 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:08.724 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:08.724 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:08.724 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.724 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.724 19:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.724 19:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.724 19:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.724 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.724 19:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.981 00:11:08.981 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:08.981 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.981 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.239 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.239 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.239 19:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.239 19:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.239 19:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.239 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.239 { 00:11:09.239 "cntlid": 21, 00:11:09.239 "qid": 0, 00:11:09.239 "state": "enabled", 00:11:09.239 
"thread": "nvmf_tgt_poll_group_000", 00:11:09.239 "listen_address": { 00:11:09.239 "trtype": "TCP", 00:11:09.239 "adrfam": "IPv4", 00:11:09.239 "traddr": "10.0.0.2", 00:11:09.239 "trsvcid": "4420" 00:11:09.239 }, 00:11:09.239 "peer_address": { 00:11:09.239 "trtype": "TCP", 00:11:09.239 "adrfam": "IPv4", 00:11:09.239 "traddr": "10.0.0.1", 00:11:09.239 "trsvcid": "60370" 00:11:09.239 }, 00:11:09.239 "auth": { 00:11:09.239 "state": "completed", 00:11:09.239 "digest": "sha256", 00:11:09.239 "dhgroup": "ffdhe3072" 00:11:09.239 } 00:11:09.239 } 00:11:09.239 ]' 00:11:09.239 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.239 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.239 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.500 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:09.500 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.500 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.500 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.500 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.760 19:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:11:10.337 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.337 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:10.337 19:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.337 19:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.337 19:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.337 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.337 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:10.337 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:10.608 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:11:10.608 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.608 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:10.608 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:10.608 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:11:10.608 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.608 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:11:10.608 19:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.608 19:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.608 19:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.608 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.608 19:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:11.175 00:11:11.175 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.175 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.175 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.433 { 00:11:11.433 "cntlid": 23, 00:11:11.433 "qid": 0, 00:11:11.433 "state": "enabled", 00:11:11.433 "thread": "nvmf_tgt_poll_group_000", 00:11:11.433 "listen_address": { 00:11:11.433 "trtype": "TCP", 00:11:11.433 "adrfam": "IPv4", 00:11:11.433 "traddr": "10.0.0.2", 00:11:11.433 "trsvcid": "4420" 00:11:11.433 }, 00:11:11.433 "peer_address": { 00:11:11.433 "trtype": "TCP", 00:11:11.433 "adrfam": "IPv4", 00:11:11.433 "traddr": "10.0.0.1", 00:11:11.433 "trsvcid": "60412" 00:11:11.433 }, 00:11:11.433 "auth": { 00:11:11.433 "state": "completed", 00:11:11.433 "digest": "sha256", 00:11:11.433 "dhgroup": "ffdhe3072" 00:11:11.433 } 00:11:11.433 } 00:11:11.433 ]' 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.433 
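The block above is one instance of the verification pattern the script repeats for every digest/dhgroup/key combination: it queries the subsystem's active queue pairs and asserts that the negotiated DH-HMAC-CHAP parameters match what was just configured. A minimal standalone sketch of that check follows; it is assembled only from the commands visible in this trace. The subsystem NQN and the expected values are the ones used in this run, and the script's rpc_cmd wrapper targets the SPDK target's RPC socket (the default socket is assumed here, since the wrapper's definition is not shown in this excerpt).

    # Sketch only: verify the negotiated auth parameters of the first qpair.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    # Default target RPC socket assumed; the test's rpc_cmd wrapper is not shown here.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256"    ]] || exit 1
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]] || exit 1
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]] || exit 1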
19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.433 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.692 19:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.628 19:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.198 00:11:13.198 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.198 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.198 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.198 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.198 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.198 19:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.198 19:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.198 19:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.198 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.198 { 00:11:13.198 "cntlid": 25, 00:11:13.198 "qid": 0, 00:11:13.198 "state": "enabled", 00:11:13.198 "thread": "nvmf_tgt_poll_group_000", 00:11:13.198 "listen_address": { 00:11:13.198 "trtype": "TCP", 00:11:13.198 "adrfam": "IPv4", 00:11:13.198 "traddr": "10.0.0.2", 00:11:13.198 "trsvcid": "4420" 00:11:13.198 }, 00:11:13.198 "peer_address": { 00:11:13.198 "trtype": "TCP", 00:11:13.198 "adrfam": "IPv4", 00:11:13.198 "traddr": "10.0.0.1", 00:11:13.198 "trsvcid": "60426" 00:11:13.198 }, 00:11:13.198 "auth": { 00:11:13.198 "state": "completed", 00:11:13.198 "digest": "sha256", 00:11:13.198 "dhgroup": "ffdhe4096" 00:11:13.198 } 00:11:13.198 } 00:11:13.198 ]' 00:11:13.198 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.458 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.458 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.458 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:13.458 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.458 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.458 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.458 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.716 19:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:11:14.283 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.283 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:14.283 19:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.283 19:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.283 19:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.283 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.283 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:14.283 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:14.542 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:11:14.542 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.542 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:14.542 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:14.542 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:14.542 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.542 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.542 19:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.542 19:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.542 19:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.542 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.542 19:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.800 00:11:15.058 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.058 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.058 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.316 { 00:11:15.316 "cntlid": 27, 00:11:15.316 "qid": 0, 00:11:15.316 "state": "enabled", 00:11:15.316 "thread": "nvmf_tgt_poll_group_000", 00:11:15.316 "listen_address": { 00:11:15.316 "trtype": "TCP", 00:11:15.316 "adrfam": "IPv4", 00:11:15.316 "traddr": "10.0.0.2", 00:11:15.316 "trsvcid": "4420" 00:11:15.316 }, 00:11:15.316 "peer_address": { 00:11:15.316 "trtype": "TCP", 00:11:15.316 "adrfam": "IPv4", 00:11:15.316 "traddr": "10.0.0.1", 00:11:15.316 "trsvcid": "56048" 00:11:15.316 }, 00:11:15.316 "auth": { 00:11:15.316 "state": "completed", 00:11:15.316 "digest": "sha256", 00:11:15.316 "dhgroup": "ffdhe4096" 00:11:15.316 } 00:11:15.316 } 00:11:15.316 ]' 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.316 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.575 19:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:16.507 19:01:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.507 19:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.070 00:11:17.070 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.070 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.070 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.327 { 00:11:17.327 "cntlid": 29, 00:11:17.327 "qid": 0, 00:11:17.327 "state": "enabled", 00:11:17.327 "thread": "nvmf_tgt_poll_group_000", 00:11:17.327 "listen_address": { 00:11:17.327 "trtype": "TCP", 00:11:17.327 "adrfam": "IPv4", 00:11:17.327 "traddr": "10.0.0.2", 00:11:17.327 "trsvcid": "4420" 00:11:17.327 }, 00:11:17.327 "peer_address": { 00:11:17.327 "trtype": "TCP", 00:11:17.327 "adrfam": "IPv4", 00:11:17.327 "traddr": "10.0.0.1", 00:11:17.327 "trsvcid": "56068" 00:11:17.327 }, 00:11:17.327 "auth": { 00:11:17.327 "state": "completed", 00:11:17.327 "digest": "sha256", 00:11:17.327 "dhgroup": "ffdhe4096" 00:11:17.327 } 00:11:17.327 } 00:11:17.327 ]' 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.327 19:01:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.327 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.592 19:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:18.524 19:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:18.782 00:11:19.040 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.040 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.040 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.040 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.040 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.040 19:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.040 19:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.040 19:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.040 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.040 { 00:11:19.040 "cntlid": 31, 00:11:19.040 "qid": 0, 00:11:19.040 "state": "enabled", 00:11:19.040 "thread": "nvmf_tgt_poll_group_000", 00:11:19.040 "listen_address": { 00:11:19.040 "trtype": "TCP", 00:11:19.040 "adrfam": "IPv4", 00:11:19.040 "traddr": "10.0.0.2", 00:11:19.040 "trsvcid": "4420" 00:11:19.040 }, 00:11:19.040 "peer_address": { 00:11:19.040 "trtype": "TCP", 00:11:19.040 "adrfam": "IPv4", 00:11:19.040 "traddr": "10.0.0.1", 00:11:19.040 "trsvcid": "56102" 00:11:19.040 }, 00:11:19.040 "auth": { 00:11:19.040 "state": "completed", 00:11:19.040 "digest": "sha256", 00:11:19.040 "dhgroup": "ffdhe4096" 00:11:19.040 } 00:11:19.040 } 00:11:19.040 ]' 00:11:19.040 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.299 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.299 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.299 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:19.299 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.299 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.299 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.299 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.557 19:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.494 19:01:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.494 19:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.061 00:11:21.061 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.061 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.061 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.320 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.320 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.320 19:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:21.320 19:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.320 19:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.320 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.320 { 00:11:21.321 "cntlid": 33, 00:11:21.321 "qid": 0, 00:11:21.321 "state": "enabled", 00:11:21.321 "thread": "nvmf_tgt_poll_group_000", 00:11:21.321 "listen_address": { 00:11:21.321 "trtype": "TCP", 00:11:21.321 "adrfam": "IPv4", 00:11:21.321 "traddr": "10.0.0.2", 00:11:21.321 "trsvcid": "4420" 00:11:21.321 }, 00:11:21.321 "peer_address": { 00:11:21.321 "trtype": "TCP", 00:11:21.321 "adrfam": "IPv4", 00:11:21.321 "traddr": "10.0.0.1", 00:11:21.321 "trsvcid": "56138" 00:11:21.321 }, 00:11:21.321 "auth": { 00:11:21.321 "state": "completed", 00:11:21.321 "digest": "sha256", 00:11:21.321 "dhgroup": "ffdhe6144" 00:11:21.321 } 00:11:21.321 } 00:11:21.321 ]' 00:11:21.321 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.321 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.321 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.321 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:21.321 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.321 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.321 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.321 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.579 19:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:22.526 19:01:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.526 19:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.093 00:11:23.093 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.093 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.093 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.352 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.352 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.352 19:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.352 19:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.352 19:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.352 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.352 { 00:11:23.352 "cntlid": 35, 00:11:23.352 "qid": 0, 00:11:23.352 "state": "enabled", 00:11:23.352 "thread": "nvmf_tgt_poll_group_000", 00:11:23.352 "listen_address": { 00:11:23.352 "trtype": "TCP", 00:11:23.352 "adrfam": "IPv4", 00:11:23.352 "traddr": "10.0.0.2", 00:11:23.352 "trsvcid": "4420" 00:11:23.352 }, 00:11:23.352 "peer_address": { 00:11:23.352 "trtype": "TCP", 00:11:23.352 "adrfam": "IPv4", 00:11:23.352 "traddr": "10.0.0.1", 00:11:23.352 "trsvcid": "56158" 00:11:23.352 }, 00:11:23.352 "auth": { 00:11:23.352 "state": "completed", 00:11:23.352 "digest": "sha256", 00:11:23.352 "dhgroup": "ffdhe6144" 00:11:23.352 } 00:11:23.352 } 00:11:23.352 ]' 00:11:23.352 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.611 
19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.611 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.611 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:23.611 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.611 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.611 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.611 19:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.870 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:11:24.437 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.437 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:24.437 19:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.437 19:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.437 19:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.437 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.437 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:24.437 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:24.695 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:24.695 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.695 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:24.695 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:24.695 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:24.695 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.695 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.695 19:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.695 19:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.954 19:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
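Each keyid iteration in this trace runs the same provision/verify/teardown cycle on the SPDK side before the nvme-cli pass. A condensed sketch of that cycle, assuming the target answers on the default RPC socket while the host-side bdev_nvme instance listens on /var/tmp/host.sock, and that key1/ckey1 are key names registered earlier in the run (outside this excerpt):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff

  # target: allow this host to authenticate with the given DH-CHAP key pair
  $rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # host: attach a controller, which forces the DH-CHAP exchange on the new qpair
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # ... verify the negotiated digest/dhgroup/state on the qpair (see the jq checks below) ...

  # teardown so the next key/dhgroup combination starts from a clean state
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"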
00:11:24.954 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.954 19:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.213 00:11:25.213 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.213 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.213 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.472 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.472 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.472 19:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.472 19:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.472 19:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.472 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.472 { 00:11:25.472 "cntlid": 37, 00:11:25.472 "qid": 0, 00:11:25.472 "state": "enabled", 00:11:25.472 "thread": "nvmf_tgt_poll_group_000", 00:11:25.472 "listen_address": { 00:11:25.472 "trtype": "TCP", 00:11:25.472 "adrfam": "IPv4", 00:11:25.472 "traddr": "10.0.0.2", 00:11:25.472 "trsvcid": "4420" 00:11:25.472 }, 00:11:25.472 "peer_address": { 00:11:25.472 "trtype": "TCP", 00:11:25.472 "adrfam": "IPv4", 00:11:25.472 "traddr": "10.0.0.1", 00:11:25.472 "trsvcid": "55174" 00:11:25.472 }, 00:11:25.472 "auth": { 00:11:25.472 "state": "completed", 00:11:25.472 "digest": "sha256", 00:11:25.472 "dhgroup": "ffdhe6144" 00:11:25.472 } 00:11:25.472 } 00:11:25.472 ]' 00:11:25.472 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.472 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.472 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:25.731 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:25.731 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:25.731 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.731 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.731 19:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.990 19:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret 
DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:11:26.556 19:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.556 19:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:26.556 19:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.556 19:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.556 19:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.556 19:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.556 19:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:26.556 19:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:26.815 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:26.815 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:26.815 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:26.815 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:26.815 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:26.815 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.815 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:11:26.815 19:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.815 19:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 19:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.815 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:26.815 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:27.382 00:11:27.382 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.382 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.382 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.640 19:01:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.640 { 00:11:27.640 "cntlid": 39, 00:11:27.640 "qid": 0, 00:11:27.640 "state": "enabled", 00:11:27.640 "thread": "nvmf_tgt_poll_group_000", 00:11:27.640 "listen_address": { 00:11:27.640 "trtype": "TCP", 00:11:27.640 "adrfam": "IPv4", 00:11:27.640 "traddr": "10.0.0.2", 00:11:27.640 "trsvcid": "4420" 00:11:27.640 }, 00:11:27.640 "peer_address": { 00:11:27.640 "trtype": "TCP", 00:11:27.640 "adrfam": "IPv4", 00:11:27.640 "traddr": "10.0.0.1", 00:11:27.640 "trsvcid": "55204" 00:11:27.640 }, 00:11:27.640 "auth": { 00:11:27.640 "state": "completed", 00:11:27.640 "digest": "sha256", 00:11:27.640 "dhgroup": "ffdhe6144" 00:11:27.640 } 00:11:27.640 } 00:11:27.640 ]' 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.640 19:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.208 19:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:11:28.775 19:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.775 19:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:28.775 19:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.775 19:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.775 19:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.775 19:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:28.775 19:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:28.775 19:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:28.775 19:01:55 nvmf_tcp.nvmf_auth_target 
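The qpairs dump interleaved above is what each iteration asserts on: for every qpair the target reports which digest and DH group the authentication actually negotiated and whether it completed. The equivalent checks as standalone commands (the trace captures the same JSON into shell variables through its rpc_cmd/hostrpc helpers):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
  jq -r '.[0].auth.digest'  qpairs.json   # e.g. "sha256"
  jq -r '.[0].auth.dhgroup' qpairs.json   # e.g. "ffdhe6144"
  jq -r '.[0].auth.state'   qpairs.json   # must be "completed"
  # and on the host side the attached controller should show up by name
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # "nvme0"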
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:29.034 19:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:29.034 19:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.034 19:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:29.034 19:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:29.034 19:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:29.034 19:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.034 19:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.034 19:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.034 19:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.034 19:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.034 19:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.034 19:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.616 00:11:29.616 19:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.616 19:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.616 19:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.892 19:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.892 19:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.892 19:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.892 19:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.892 19:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.892 19:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.892 { 00:11:29.892 "cntlid": 41, 00:11:29.892 "qid": 0, 00:11:29.892 "state": "enabled", 00:11:29.892 "thread": "nvmf_tgt_poll_group_000", 00:11:29.892 "listen_address": { 00:11:29.892 "trtype": "TCP", 00:11:29.892 "adrfam": "IPv4", 00:11:29.892 "traddr": "10.0.0.2", 00:11:29.892 "trsvcid": "4420" 00:11:29.892 }, 00:11:29.892 "peer_address": { 00:11:29.892 "trtype": "TCP", 00:11:29.892 "adrfam": "IPv4", 00:11:29.892 "traddr": "10.0.0.1", 00:11:29.892 "trsvcid": "55222" 00:11:29.892 }, 00:11:29.892 "auth": { 00:11:29.892 "state": "completed", 00:11:29.892 "digest": "sha256", 
00:11:29.892 "dhgroup": "ffdhe8192" 00:11:29.892 } 00:11:29.892 } 00:11:29.892 ]' 00:11:29.892 19:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.892 19:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:29.892 19:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.151 19:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:30.151 19:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.151 19:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.151 19:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.151 19:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.411 19:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:11:30.978 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.978 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:30.978 19:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.978 19:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.978 19:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.978 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.978 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:30.979 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:31.237 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:31.237 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.237 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:31.237 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:31.237 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:31.237 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.237 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.237 19:01:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.237 19:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.237 19:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.237 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.237 19:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.804 00:11:32.063 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.063 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.063 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.063 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.063 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.063 19:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.063 19:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.063 19:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.063 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.063 { 00:11:32.063 "cntlid": 43, 00:11:32.063 "qid": 0, 00:11:32.063 "state": "enabled", 00:11:32.063 "thread": "nvmf_tgt_poll_group_000", 00:11:32.063 "listen_address": { 00:11:32.063 "trtype": "TCP", 00:11:32.063 "adrfam": "IPv4", 00:11:32.063 "traddr": "10.0.0.2", 00:11:32.063 "trsvcid": "4420" 00:11:32.063 }, 00:11:32.063 "peer_address": { 00:11:32.063 "trtype": "TCP", 00:11:32.063 "adrfam": "IPv4", 00:11:32.063 "traddr": "10.0.0.1", 00:11:32.063 "trsvcid": "55250" 00:11:32.063 }, 00:11:32.063 "auth": { 00:11:32.063 "state": "completed", 00:11:32.063 "digest": "sha256", 00:11:32.063 "dhgroup": "ffdhe8192" 00:11:32.063 } 00:11:32.063 } 00:11:32.063 ]' 00:11:32.063 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.322 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:32.322 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.322 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:32.322 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.322 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.322 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.322 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.580 19:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:11:33.517 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.517 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:33.517 19:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.517 19:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.517 19:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.517 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.517 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:33.517 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:33.776 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:33.776 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.776 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:33.776 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:33.776 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:33.776 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.776 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.776 19:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.776 19:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.776 19:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.776 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.776 19:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.342 00:11:34.342 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.342 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.342 
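The secrets passed to nvme-cli above follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<nn>:<base64 payload>:, where the two-digit field encodes how the secret is stored (00 for a plain secret, 01/02/03 for one transformed with SHA-256/384/512). That is background from the NVMe-oF authentication spec rather than something this log states, though it is consistent with key0 through key3 in this run carrying the 00/01/02/03 prefixes respectively. Recent nvme-cli builds can mint such secrets; for example (flag names vary by version, check nvme gen-dhchap-key --help):

  nvme gen-dhchap-key --key-length=32 --hmac=1 \
      --nqn=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff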
19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.601 { 00:11:34.601 "cntlid": 45, 00:11:34.601 "qid": 0, 00:11:34.601 "state": "enabled", 00:11:34.601 "thread": "nvmf_tgt_poll_group_000", 00:11:34.601 "listen_address": { 00:11:34.601 "trtype": "TCP", 00:11:34.601 "adrfam": "IPv4", 00:11:34.601 "traddr": "10.0.0.2", 00:11:34.601 "trsvcid": "4420" 00:11:34.601 }, 00:11:34.601 "peer_address": { 00:11:34.601 "trtype": "TCP", 00:11:34.601 "adrfam": "IPv4", 00:11:34.601 "traddr": "10.0.0.1", 00:11:34.601 "trsvcid": "55274" 00:11:34.601 }, 00:11:34.601 "auth": { 00:11:34.601 "state": "completed", 00:11:34.601 "digest": "sha256", 00:11:34.601 "dhgroup": "ffdhe8192" 00:11:34.601 } 00:11:34.601 } 00:11:34.601 ]' 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.601 19:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.167 19:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:11:35.733 19:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.733 19:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:35.733 19:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.733 19:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.733 19:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.733 19:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 00:11:35.733 19:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:35.733 19:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:35.992 19:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:35.992 19:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.992 19:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:35.992 19:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:35.992 19:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:35.992 19:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.992 19:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:11:35.992 19:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.992 19:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.992 19:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.992 19:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:35.992 19:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:36.925 00:11:36.925 19:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.925 19:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.925 19:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.925 19:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.925 19:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.925 19:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.925 19:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.925 19:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.925 19:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.925 { 00:11:36.925 "cntlid": 47, 00:11:36.925 "qid": 0, 00:11:36.925 "state": "enabled", 00:11:36.925 "thread": "nvmf_tgt_poll_group_000", 00:11:36.925 "listen_address": { 00:11:36.925 "trtype": "TCP", 00:11:36.925 "adrfam": "IPv4", 00:11:36.925 "traddr": "10.0.0.2", 00:11:36.925 "trsvcid": "4420" 00:11:36.925 }, 00:11:36.925 "peer_address": { 00:11:36.926 "trtype": "TCP", 00:11:36.926 "adrfam": "IPv4", 00:11:36.926 "traddr": 
"10.0.0.1", 00:11:36.926 "trsvcid": "49176" 00:11:36.926 }, 00:11:36.926 "auth": { 00:11:36.926 "state": "completed", 00:11:36.926 "digest": "sha256", 00:11:36.926 "dhgroup": "ffdhe8192" 00:11:36.926 } 00:11:36.926 } 00:11:36.926 ]' 00:11:36.926 19:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.184 19:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.184 19:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.184 19:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:37.184 19:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.184 19:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.184 19:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.184 19:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.442 19:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.374 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.940 00:11:38.940 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.940 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.940 19:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.198 { 00:11:39.198 "cntlid": 49, 00:11:39.198 "qid": 0, 00:11:39.198 "state": "enabled", 00:11:39.198 "thread": "nvmf_tgt_poll_group_000", 00:11:39.198 "listen_address": { 00:11:39.198 "trtype": "TCP", 00:11:39.198 "adrfam": "IPv4", 00:11:39.198 "traddr": "10.0.0.2", 00:11:39.198 "trsvcid": "4420" 00:11:39.198 }, 00:11:39.198 "peer_address": { 00:11:39.198 "trtype": "TCP", 00:11:39.198 "adrfam": "IPv4", 00:11:39.198 "traddr": "10.0.0.1", 00:11:39.198 "trsvcid": "49198" 00:11:39.198 }, 00:11:39.198 "auth": { 00:11:39.198 "state": "completed", 00:11:39.198 "digest": "sha384", 00:11:39.198 "dhgroup": "null" 00:11:39.198 } 00:11:39.198 } 00:11:39.198 ]' 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.198 19:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.198 19:02:06 
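At this point the trace has moved from sha256 to sha384 and from the ffdhe groups to the "null" DH group (i.e. DH-HMAC-CHAP with no ephemeral Diffie-Hellman exchange). The auth.sh@91-96 frames visible in the log imply three nested loops, with the host pinned to exactly one digest/dhgroup combination before every attempt; reconstructed from those frames (the digests, dhgroups and keys arrays are defined earlier in the script, outside this excerpt):

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # restrict the host to a single digest and DH group so the negotiation is deterministic
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done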
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.455 19:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:11:40.390 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.390 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:40.390 19:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.390 19:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.390 19:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.390 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.390 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:40.390 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:40.648 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:40.648 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.648 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:40.648 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:40.648 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:40.648 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.648 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.648 19:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.648 19:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.648 19:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.648 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.648 19:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key1 --dhchap-ctrlr-key ckey1 00:11:40.905 00:11:40.905 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.905 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.905 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.161 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.161 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.161 19:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.161 19:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.161 19:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.161 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.161 { 00:11:41.161 "cntlid": 51, 00:11:41.162 "qid": 0, 00:11:41.162 "state": "enabled", 00:11:41.162 "thread": "nvmf_tgt_poll_group_000", 00:11:41.162 "listen_address": { 00:11:41.162 "trtype": "TCP", 00:11:41.162 "adrfam": "IPv4", 00:11:41.162 "traddr": "10.0.0.2", 00:11:41.162 "trsvcid": "4420" 00:11:41.162 }, 00:11:41.162 "peer_address": { 00:11:41.162 "trtype": "TCP", 00:11:41.162 "adrfam": "IPv4", 00:11:41.162 "traddr": "10.0.0.1", 00:11:41.162 "trsvcid": "49232" 00:11:41.162 }, 00:11:41.162 "auth": { 00:11:41.162 "state": "completed", 00:11:41.162 "digest": "sha384", 00:11:41.162 "dhgroup": "null" 00:11:41.162 } 00:11:41.162 } 00:11:41.162 ]' 00:11:41.162 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.162 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.162 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.162 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:41.162 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.418 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.418 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.418 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.676 19:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:11:42.240 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.240 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:42.240 19:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.240 19:02:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.240 19:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.240 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.240 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:42.240 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:42.497 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:42.497 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.497 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.497 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:42.497 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:42.497 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.497 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.497 19:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.497 19:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.497 19:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.497 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.497 19:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.810 00:11:42.810 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.810 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.810 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.067 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.067 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.067 19:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.067 19:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.067 19:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.067 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.067 { 00:11:43.067 "cntlid": 53, 00:11:43.067 "qid": 0, 00:11:43.067 "state": "enabled", 00:11:43.067 "thread": 
"nvmf_tgt_poll_group_000", 00:11:43.067 "listen_address": { 00:11:43.067 "trtype": "TCP", 00:11:43.067 "adrfam": "IPv4", 00:11:43.067 "traddr": "10.0.0.2", 00:11:43.067 "trsvcid": "4420" 00:11:43.067 }, 00:11:43.067 "peer_address": { 00:11:43.067 "trtype": "TCP", 00:11:43.067 "adrfam": "IPv4", 00:11:43.067 "traddr": "10.0.0.1", 00:11:43.067 "trsvcid": "49270" 00:11:43.067 }, 00:11:43.067 "auth": { 00:11:43.067 "state": "completed", 00:11:43.067 "digest": "sha384", 00:11:43.067 "dhgroup": "null" 00:11:43.067 } 00:11:43.067 } 00:11:43.067 ]' 00:11:43.067 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.325 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.325 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.325 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:43.325 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.325 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.325 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.326 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.584 19:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:11:44.150 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.408 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:44.408 19:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.408 19:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.408 19:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.408 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.408 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:44.408 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:44.666 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:11:44.666 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.666 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:44.666 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:44.666 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:44.666 19:02:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.666 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:11:44.666 19:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.666 19:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.666 19:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.666 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.666 19:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.936 00:11:44.936 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:44.936 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.936 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.194 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.194 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.194 19:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.194 19:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.194 19:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.194 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.194 { 00:11:45.194 "cntlid": 55, 00:11:45.194 "qid": 0, 00:11:45.194 "state": "enabled", 00:11:45.194 "thread": "nvmf_tgt_poll_group_000", 00:11:45.194 "listen_address": { 00:11:45.194 "trtype": "TCP", 00:11:45.194 "adrfam": "IPv4", 00:11:45.194 "traddr": "10.0.0.2", 00:11:45.194 "trsvcid": "4420" 00:11:45.194 }, 00:11:45.194 "peer_address": { 00:11:45.194 "trtype": "TCP", 00:11:45.194 "adrfam": "IPv4", 00:11:45.194 "traddr": "10.0.0.1", 00:11:45.194 "trsvcid": "48934" 00:11:45.194 }, 00:11:45.194 "auth": { 00:11:45.194 "state": "completed", 00:11:45.194 "digest": "sha384", 00:11:45.194 "dhgroup": "null" 00:11:45.194 } 00:11:45.194 } 00:11:45.194 ]' 00:11:45.194 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.195 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.195 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.452 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:45.452 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.452 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.452 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.452 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.711 19:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:11:46.283 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.283 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:46.283 19:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.283 19:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.283 19:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.283 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:46.283 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.283 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:46.283 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:46.593 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:46.593 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.593 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:46.593 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:46.593 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:46.593 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.593 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.593 19:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.593 19:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.593 19:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.593 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.593 19:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.852 00:11:46.852 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.852 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.852 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.110 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.110 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.110 19:02:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.110 19:02:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.110 19:02:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.110 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.110 { 00:11:47.110 "cntlid": 57, 00:11:47.110 "qid": 0, 00:11:47.110 "state": "enabled", 00:11:47.110 "thread": "nvmf_tgt_poll_group_000", 00:11:47.110 "listen_address": { 00:11:47.110 "trtype": "TCP", 00:11:47.110 "adrfam": "IPv4", 00:11:47.110 "traddr": "10.0.0.2", 00:11:47.110 "trsvcid": "4420" 00:11:47.110 }, 00:11:47.110 "peer_address": { 00:11:47.110 "trtype": "TCP", 00:11:47.110 "adrfam": "IPv4", 00:11:47.110 "traddr": "10.0.0.1", 00:11:47.110 "trsvcid": "48976" 00:11:47.110 }, 00:11:47.110 "auth": { 00:11:47.110 "state": "completed", 00:11:47.110 "digest": "sha384", 00:11:47.110 "dhgroup": "ffdhe2048" 00:11:47.110 } 00:11:47.110 } 00:11:47.110 ]' 00:11:47.110 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.368 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.368 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.368 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:47.368 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.368 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.369 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.369 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.627 19:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:11:48.189 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.189 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:48.189 19:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.189 19:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.189 19:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.189 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.189 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:48.189 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:48.446 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:48.446 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.446 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:48.446 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:48.446 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:48.446 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.446 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.446 19:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.446 19:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.446 19:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.446 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.446 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.704 00:11:48.704 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:48.704 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.705 19:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:48.963 19:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.963 19:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.963 19:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.963 19:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.963 19:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.963 
19:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:48.963 { 00:11:48.963 "cntlid": 59, 00:11:48.963 "qid": 0, 00:11:48.963 "state": "enabled", 00:11:48.963 "thread": "nvmf_tgt_poll_group_000", 00:11:48.963 "listen_address": { 00:11:48.963 "trtype": "TCP", 00:11:48.963 "adrfam": "IPv4", 00:11:48.963 "traddr": "10.0.0.2", 00:11:48.963 "trsvcid": "4420" 00:11:48.963 }, 00:11:48.963 "peer_address": { 00:11:48.963 "trtype": "TCP", 00:11:48.963 "adrfam": "IPv4", 00:11:48.963 "traddr": "10.0.0.1", 00:11:48.963 "trsvcid": "49020" 00:11:48.963 }, 00:11:48.963 "auth": { 00:11:48.963 "state": "completed", 00:11:48.963 "digest": "sha384", 00:11:48.963 "dhgroup": "ffdhe2048" 00:11:48.963 } 00:11:48.963 } 00:11:48.963 ]' 00:11:49.220 19:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.220 19:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.220 19:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.220 19:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:49.220 19:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.220 19:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.220 19:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.220 19:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.477 19:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:11:50.042 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.042 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:50.042 19:02:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.042 19:02:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.042 19:02:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.042 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.042 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:50.042 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:50.607 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:50.607 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.607 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:11:50.607 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:50.607 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:50.607 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.607 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.607 19:02:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.607 19:02:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.607 19:02:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.607 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.607 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.866 00:11:50.866 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.866 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.866 19:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.124 { 00:11:51.124 "cntlid": 61, 00:11:51.124 "qid": 0, 00:11:51.124 "state": "enabled", 00:11:51.124 "thread": "nvmf_tgt_poll_group_000", 00:11:51.124 "listen_address": { 00:11:51.124 "trtype": "TCP", 00:11:51.124 "adrfam": "IPv4", 00:11:51.124 "traddr": "10.0.0.2", 00:11:51.124 "trsvcid": "4420" 00:11:51.124 }, 00:11:51.124 "peer_address": { 00:11:51.124 "trtype": "TCP", 00:11:51.124 "adrfam": "IPv4", 00:11:51.124 "traddr": "10.0.0.1", 00:11:51.124 "trsvcid": "49046" 00:11:51.124 }, 00:11:51.124 "auth": { 00:11:51.124 "state": "completed", 00:11:51.124 "digest": "sha384", 00:11:51.124 "dhgroup": "ffdhe2048" 00:11:51.124 } 00:11:51.124 } 00:11:51.124 ]' 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.124 19:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.381 19:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:11:52.311 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.311 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:52.311 19:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.311 19:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.311 19:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.311 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.311 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:52.311 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:52.579 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:52.579 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.579 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:52.579 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:52.579 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:52.579 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.579 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:11:52.579 19:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.579 19:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.579 19:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.579 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.579 19:02:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.837 00:11:52.837 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.837 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.837 19:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.094 19:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.094 19:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.094 19:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.094 19:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.094 19:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.094 19:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.094 { 00:11:53.094 "cntlid": 63, 00:11:53.094 "qid": 0, 00:11:53.094 "state": "enabled", 00:11:53.094 "thread": "nvmf_tgt_poll_group_000", 00:11:53.094 "listen_address": { 00:11:53.094 "trtype": "TCP", 00:11:53.094 "adrfam": "IPv4", 00:11:53.094 "traddr": "10.0.0.2", 00:11:53.094 "trsvcid": "4420" 00:11:53.094 }, 00:11:53.094 "peer_address": { 00:11:53.094 "trtype": "TCP", 00:11:53.094 "adrfam": "IPv4", 00:11:53.094 "traddr": "10.0.0.1", 00:11:53.094 "trsvcid": "49074" 00:11:53.094 }, 00:11:53.094 "auth": { 00:11:53.094 "state": "completed", 00:11:53.094 "digest": "sha384", 00:11:53.094 "dhgroup": "ffdhe2048" 00:11:53.094 } 00:11:53.094 } 00:11:53.094 ]' 00:11:53.094 19:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.094 19:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.094 19:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.094 19:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:53.094 19:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.351 19:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.351 19:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.351 19:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.608 19:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:11:54.174 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.174 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:54.174 19:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.174 19:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.174 19:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.174 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.174 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.174 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:54.174 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:54.432 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:54.432 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.432 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:54.432 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:54.432 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:54.432 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.432 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.432 19:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.432 19:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.432 19:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.432 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.432 19:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.039 00:11:55.039 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.039 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.039 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.296 19:02:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.296 { 00:11:55.296 "cntlid": 65, 00:11:55.296 "qid": 0, 00:11:55.296 "state": "enabled", 00:11:55.296 "thread": "nvmf_tgt_poll_group_000", 00:11:55.296 "listen_address": { 00:11:55.296 "trtype": "TCP", 00:11:55.296 "adrfam": "IPv4", 00:11:55.296 "traddr": "10.0.0.2", 00:11:55.296 "trsvcid": "4420" 00:11:55.296 }, 00:11:55.296 "peer_address": { 00:11:55.296 "trtype": "TCP", 00:11:55.296 "adrfam": "IPv4", 00:11:55.296 "traddr": "10.0.0.1", 00:11:55.296 "trsvcid": "56284" 00:11:55.296 }, 00:11:55.296 "auth": { 00:11:55.296 "state": "completed", 00:11:55.296 "digest": "sha384", 00:11:55.296 "dhgroup": "ffdhe3072" 00:11:55.296 } 00:11:55.296 } 00:11:55.296 ]' 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.296 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.554 19:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe3072 1 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.505 19:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.071 00:11:57.071 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.071 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.071 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.329 { 00:11:57.329 "cntlid": 67, 00:11:57.329 "qid": 0, 00:11:57.329 "state": "enabled", 00:11:57.329 "thread": "nvmf_tgt_poll_group_000", 00:11:57.329 "listen_address": { 00:11:57.329 "trtype": "TCP", 00:11:57.329 "adrfam": "IPv4", 00:11:57.329 "traddr": "10.0.0.2", 00:11:57.329 "trsvcid": "4420" 00:11:57.329 }, 00:11:57.329 "peer_address": { 00:11:57.329 "trtype": "TCP", 00:11:57.329 "adrfam": "IPv4", 00:11:57.329 "traddr": "10.0.0.1", 00:11:57.329 "trsvcid": "56316" 00:11:57.329 }, 00:11:57.329 "auth": { 00:11:57.329 "state": "completed", 00:11:57.329 "digest": "sha384", 00:11:57.329 "dhgroup": "ffdhe3072" 00:11:57.329 } 00:11:57.329 } 00:11:57.329 ]' 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.329 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.895 19:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:11:58.462 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.462 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:11:58.462 19:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.462 19:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.462 19:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.462 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.462 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:58.462 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:58.721 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:58.721 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.721 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:58.721 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:58.721 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:58.721 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.721 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.721 19:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.721 19:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.721 19:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.721 19:02:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.721 19:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.979 00:11:58.979 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.979 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.979 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.237 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.237 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.237 19:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.237 19:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.237 19:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.237 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.237 { 00:11:59.237 "cntlid": 69, 00:11:59.237 "qid": 0, 00:11:59.237 "state": "enabled", 00:11:59.237 "thread": "nvmf_tgt_poll_group_000", 00:11:59.237 "listen_address": { 00:11:59.237 "trtype": "TCP", 00:11:59.237 "adrfam": "IPv4", 00:11:59.237 "traddr": "10.0.0.2", 00:11:59.237 "trsvcid": "4420" 00:11:59.237 }, 00:11:59.237 "peer_address": { 00:11:59.237 "trtype": "TCP", 00:11:59.237 "adrfam": "IPv4", 00:11:59.237 "traddr": "10.0.0.1", 00:11:59.237 "trsvcid": "56328" 00:11:59.237 }, 00:11:59.237 "auth": { 00:11:59.237 "state": "completed", 00:11:59.237 "digest": "sha384", 00:11:59.237 "dhgroup": "ffdhe3072" 00:11:59.237 } 00:11:59.237 } 00:11:59.237 ]' 00:11:59.237 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.495 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.495 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.495 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:59.495 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.495 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.495 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.495 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.755 19:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret 
DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:12:00.319 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.319 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:00.319 19:02:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.319 19:02:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.319 19:02:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.319 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.319 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:00.319 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:00.586 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:00.586 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.586 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:00.586 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:00.586 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:00.586 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.586 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:12:00.586 19:02:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.586 19:02:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.586 19:02:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.586 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:00.586 19:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:00.861 00:12:00.861 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.861 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.861 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.426 19:02:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.426 { 00:12:01.426 "cntlid": 71, 00:12:01.426 "qid": 0, 00:12:01.426 "state": "enabled", 00:12:01.426 "thread": "nvmf_tgt_poll_group_000", 00:12:01.426 "listen_address": { 00:12:01.426 "trtype": "TCP", 00:12:01.426 "adrfam": "IPv4", 00:12:01.426 "traddr": "10.0.0.2", 00:12:01.426 "trsvcid": "4420" 00:12:01.426 }, 00:12:01.426 "peer_address": { 00:12:01.426 "trtype": "TCP", 00:12:01.426 "adrfam": "IPv4", 00:12:01.426 "traddr": "10.0.0.1", 00:12:01.426 "trsvcid": "56366" 00:12:01.426 }, 00:12:01.426 "auth": { 00:12:01.426 "state": "completed", 00:12:01.426 "digest": "sha384", 00:12:01.426 "dhgroup": "ffdhe3072" 00:12:01.426 } 00:12:01.426 } 00:12:01.426 ]' 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.426 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.683 19:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:12:02.616 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.616 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:02.616 19:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.616 19:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.616 19:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.616 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:02.616 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.616 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:02.616 19:02:29 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:02.874 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:02.874 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.874 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:02.874 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:02.874 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:02.874 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.874 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.874 19:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.874 19:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.874 19:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.874 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.874 19:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.132 00:12:03.132 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.132 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.132 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.391 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.391 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.391 19:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.391 19:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.391 19:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.391 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.391 { 00:12:03.391 "cntlid": 73, 00:12:03.391 "qid": 0, 00:12:03.391 "state": "enabled", 00:12:03.391 "thread": "nvmf_tgt_poll_group_000", 00:12:03.391 "listen_address": { 00:12:03.391 "trtype": "TCP", 00:12:03.391 "adrfam": "IPv4", 00:12:03.391 "traddr": "10.0.0.2", 00:12:03.391 "trsvcid": "4420" 00:12:03.391 }, 00:12:03.391 "peer_address": { 00:12:03.391 "trtype": "TCP", 00:12:03.391 "adrfam": "IPv4", 00:12:03.391 "traddr": "10.0.0.1", 00:12:03.391 "trsvcid": "56390" 00:12:03.391 }, 00:12:03.391 "auth": { 00:12:03.391 "state": "completed", 00:12:03.391 "digest": "sha384", 
00:12:03.391 "dhgroup": "ffdhe4096" 00:12:03.391 } 00:12:03.391 } 00:12:03.391 ]' 00:12:03.391 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.391 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:03.391 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.650 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.650 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.650 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.650 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.650 19:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.908 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:12:04.472 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.472 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:04.472 19:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.472 19:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.472 19:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.472 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.472 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:04.472 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:04.729 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:04.730 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.730 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:04.730 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:04.730 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:04.730 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.730 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.730 19:02:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.730 19:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.730 19:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.730 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.730 19:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.988 00:12:04.988 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.988 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.988 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.554 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.554 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.554 19:02:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.554 19:02:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.554 19:02:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.554 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:05.554 { 00:12:05.554 "cntlid": 75, 00:12:05.554 "qid": 0, 00:12:05.554 "state": "enabled", 00:12:05.554 "thread": "nvmf_tgt_poll_group_000", 00:12:05.554 "listen_address": { 00:12:05.554 "trtype": "TCP", 00:12:05.554 "adrfam": "IPv4", 00:12:05.554 "traddr": "10.0.0.2", 00:12:05.554 "trsvcid": "4420" 00:12:05.554 }, 00:12:05.554 "peer_address": { 00:12:05.554 "trtype": "TCP", 00:12:05.554 "adrfam": "IPv4", 00:12:05.554 "traddr": "10.0.0.1", 00:12:05.554 "trsvcid": "45880" 00:12:05.554 }, 00:12:05.554 "auth": { 00:12:05.554 "state": "completed", 00:12:05.554 "digest": "sha384", 00:12:05.554 "dhgroup": "ffdhe4096" 00:12:05.554 } 00:12:05.554 } 00:12:05.554 ]' 00:12:05.554 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.554 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:05.554 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.555 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:05.555 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.555 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.555 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.555 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.813 19:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:12:06.487 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.487 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:06.487 19:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.487 19:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.487 19:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.487 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.487 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:06.487 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:06.746 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:12:06.746 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.746 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:06.746 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:06.746 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:06.746 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.746 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.746 19:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.746 19:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.746 19:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.746 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.746 19:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.004 00:12:07.262 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.262 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.262 
19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.520 { 00:12:07.520 "cntlid": 77, 00:12:07.520 "qid": 0, 00:12:07.520 "state": "enabled", 00:12:07.520 "thread": "nvmf_tgt_poll_group_000", 00:12:07.520 "listen_address": { 00:12:07.520 "trtype": "TCP", 00:12:07.520 "adrfam": "IPv4", 00:12:07.520 "traddr": "10.0.0.2", 00:12:07.520 "trsvcid": "4420" 00:12:07.520 }, 00:12:07.520 "peer_address": { 00:12:07.520 "trtype": "TCP", 00:12:07.520 "adrfam": "IPv4", 00:12:07.520 "traddr": "10.0.0.1", 00:12:07.520 "trsvcid": "45922" 00:12:07.520 }, 00:12:07.520 "auth": { 00:12:07.520 "state": "completed", 00:12:07.520 "digest": "sha384", 00:12:07.520 "dhgroup": "ffdhe4096" 00:12:07.520 } 00:12:07.520 } 00:12:07.520 ]' 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.520 19:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.778 19:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:12:08.713 19:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.713 19:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:08.713 19:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.713 19:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.713 19:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.713 19:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 00:12:08.713 19:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:08.713 19:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:08.971 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:12:08.971 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.971 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:08.971 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:08.971 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:08.971 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.971 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:12:08.971 19:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.971 19:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.971 19:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.971 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:08.971 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:09.229 00:12:09.229 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.229 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.229 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.488 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.488 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.488 19:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.488 19:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.488 19:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.488 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.488 { 00:12:09.488 "cntlid": 79, 00:12:09.488 "qid": 0, 00:12:09.488 "state": "enabled", 00:12:09.488 "thread": "nvmf_tgt_poll_group_000", 00:12:09.488 "listen_address": { 00:12:09.488 "trtype": "TCP", 00:12:09.488 "adrfam": "IPv4", 00:12:09.488 "traddr": "10.0.0.2", 00:12:09.488 "trsvcid": "4420" 00:12:09.488 }, 00:12:09.488 "peer_address": { 00:12:09.488 "trtype": "TCP", 00:12:09.488 "adrfam": "IPv4", 00:12:09.488 "traddr": 
"10.0.0.1", 00:12:09.488 "trsvcid": "45936" 00:12:09.488 }, 00:12:09.488 "auth": { 00:12:09.488 "state": "completed", 00:12:09.488 "digest": "sha384", 00:12:09.488 "dhgroup": "ffdhe4096" 00:12:09.488 } 00:12:09.488 } 00:12:09.488 ]' 00:12:09.488 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.746 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.746 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.746 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:09.746 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.746 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.746 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.746 19:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.004 19:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:12:10.570 19:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.570 19:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:10.570 19:02:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.570 19:02:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.829 19:02:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.829 19:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:10.829 19:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.829 19:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:10.829 19:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:11.087 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:12:11.087 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.087 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:11.087 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:11.087 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:11.087 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.087 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.087 19:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.087 19:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.087 19:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.087 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.087 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.347 00:12:11.347 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.347 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.347 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.605 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.605 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.605 19:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.605 19:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.864 19:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.864 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.864 { 00:12:11.864 "cntlid": 81, 00:12:11.864 "qid": 0, 00:12:11.864 "state": "enabled", 00:12:11.864 "thread": "nvmf_tgt_poll_group_000", 00:12:11.864 "listen_address": { 00:12:11.864 "trtype": "TCP", 00:12:11.864 "adrfam": "IPv4", 00:12:11.864 "traddr": "10.0.0.2", 00:12:11.864 "trsvcid": "4420" 00:12:11.864 }, 00:12:11.864 "peer_address": { 00:12:11.864 "trtype": "TCP", 00:12:11.864 "adrfam": "IPv4", 00:12:11.864 "traddr": "10.0.0.1", 00:12:11.864 "trsvcid": "45962" 00:12:11.864 }, 00:12:11.864 "auth": { 00:12:11.864 "state": "completed", 00:12:11.864 "digest": "sha384", 00:12:11.864 "dhgroup": "ffdhe6144" 00:12:11.864 } 00:12:11.864 } 00:12:11.864 ]' 00:12:11.864 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.864 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.864 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.864 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:11.864 19:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.864 19:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.864 19:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.864 19:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.145 19:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.075 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
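For readers following the trace, each pass of the loop above exercises the same DH-HMAC-CHAP cycle. A minimal sketch of one pass, restricted to RPCs that actually appear in this log (the rpc.py path, the /var/tmp/host.sock host socket, the 10.0.0.2:4420 listener and the cnode0/host NQNs are copied from the trace; in the script itself the target-side calls go through rpc_cmd rather than a bare rpc.py invocation, so treat this as an illustration, not the script's exact code):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff
  # host side: restrict the initiator to the digest/dhgroup pair under test
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # target side: register the host NQN with its DH-HMAC-CHAP key (and controller key, when one exists)
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attaching a controller triggers the authentication handshake
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # target side: the new qpair should report auth.state == "completed" with the expected digest/dhgroup
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  # tear down: detach on the host; in the trace the kernel-initiator check (nvme connect --dhchap-secret ...,
  # nvme disconnect) runs next, and only then is the host removed from the subsystem
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
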
00:12:13.639 00:12:13.639 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.639 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.639 19:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.897 19:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.897 19:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.897 19:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.897 19:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.897 19:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.897 19:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.897 { 00:12:13.897 "cntlid": 83, 00:12:13.897 "qid": 0, 00:12:13.897 "state": "enabled", 00:12:13.897 "thread": "nvmf_tgt_poll_group_000", 00:12:13.897 "listen_address": { 00:12:13.897 "trtype": "TCP", 00:12:13.897 "adrfam": "IPv4", 00:12:13.897 "traddr": "10.0.0.2", 00:12:13.897 "trsvcid": "4420" 00:12:13.897 }, 00:12:13.897 "peer_address": { 00:12:13.897 "trtype": "TCP", 00:12:13.897 "adrfam": "IPv4", 00:12:13.897 "traddr": "10.0.0.1", 00:12:13.897 "trsvcid": "46000" 00:12:13.897 }, 00:12:13.897 "auth": { 00:12:13.897 "state": "completed", 00:12:13.897 "digest": "sha384", 00:12:13.897 "dhgroup": "ffdhe6144" 00:12:13.897 } 00:12:13.897 } 00:12:13.897 ]' 00:12:13.897 19:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.155 19:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.155 19:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.155 19:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:14.155 19:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.155 19:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.155 19:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.155 19:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.413 19:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:12:14.979 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.979 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:14.979 19:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.979 19:02:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.979 19:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.979 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.979 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:14.979 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:15.237 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:12:15.237 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.237 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:15.237 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:15.237 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:15.237 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.237 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.237 19:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.237 19:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.496 19:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.496 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.496 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.758 00:12:15.758 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.758 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.758 19:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.024 19:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.024 19:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.024 19:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.024 19:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.282 19:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.282 19:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.282 { 00:12:16.282 "cntlid": 85, 00:12:16.282 "qid": 0, 00:12:16.282 "state": "enabled", 00:12:16.282 "thread": 
"nvmf_tgt_poll_group_000", 00:12:16.282 "listen_address": { 00:12:16.282 "trtype": "TCP", 00:12:16.282 "adrfam": "IPv4", 00:12:16.282 "traddr": "10.0.0.2", 00:12:16.282 "trsvcid": "4420" 00:12:16.282 }, 00:12:16.282 "peer_address": { 00:12:16.282 "trtype": "TCP", 00:12:16.282 "adrfam": "IPv4", 00:12:16.282 "traddr": "10.0.0.1", 00:12:16.282 "trsvcid": "41424" 00:12:16.282 }, 00:12:16.282 "auth": { 00:12:16.282 "state": "completed", 00:12:16.282 "digest": "sha384", 00:12:16.282 "dhgroup": "ffdhe6144" 00:12:16.282 } 00:12:16.282 } 00:12:16.282 ]' 00:12:16.282 19:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.282 19:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.283 19:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.283 19:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:16.283 19:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.283 19:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.283 19:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.283 19:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.541 19:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:12:17.474 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.474 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:17.474 19:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.474 19:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.474 19:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.474 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.474 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:17.474 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:17.474 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:17.474 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.474 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:17.475 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:17.475 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 
00:12:17.475 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.475 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:12:17.475 19:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.475 19:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.475 19:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.475 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:17.475 19:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:18.039 00:12:18.039 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.039 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.039 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.297 { 00:12:18.297 "cntlid": 87, 00:12:18.297 "qid": 0, 00:12:18.297 "state": "enabled", 00:12:18.297 "thread": "nvmf_tgt_poll_group_000", 00:12:18.297 "listen_address": { 00:12:18.297 "trtype": "TCP", 00:12:18.297 "adrfam": "IPv4", 00:12:18.297 "traddr": "10.0.0.2", 00:12:18.297 "trsvcid": "4420" 00:12:18.297 }, 00:12:18.297 "peer_address": { 00:12:18.297 "trtype": "TCP", 00:12:18.297 "adrfam": "IPv4", 00:12:18.297 "traddr": "10.0.0.1", 00:12:18.297 "trsvcid": "41458" 00:12:18.297 }, 00:12:18.297 "auth": { 00:12:18.297 "state": "completed", 00:12:18.297 "digest": "sha384", 00:12:18.297 "dhgroup": "ffdhe6144" 00:12:18.297 } 00:12:18.297 } 00:12:18.297 ]' 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.297 19:02:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.297 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.555 19:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:12:19.487 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.487 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:19.487 19:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.487 19:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.487 19:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.487 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:19.487 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.487 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:19.487 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:19.744 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:19.745 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.745 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:19.745 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:19.745 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:19.745 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.745 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.745 19:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.745 19:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.745 19:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.745 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.745 19:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.361 00:12:20.619 19:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.619 19:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.619 19:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.878 19:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.878 19:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.878 19:02:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.878 19:02:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.878 19:02:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.878 19:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.878 { 00:12:20.878 "cntlid": 89, 00:12:20.878 "qid": 0, 00:12:20.878 "state": "enabled", 00:12:20.878 "thread": "nvmf_tgt_poll_group_000", 00:12:20.878 "listen_address": { 00:12:20.878 "trtype": "TCP", 00:12:20.878 "adrfam": "IPv4", 00:12:20.878 "traddr": "10.0.0.2", 00:12:20.878 "trsvcid": "4420" 00:12:20.878 }, 00:12:20.878 "peer_address": { 00:12:20.878 "trtype": "TCP", 00:12:20.878 "adrfam": "IPv4", 00:12:20.878 "traddr": "10.0.0.1", 00:12:20.878 "trsvcid": "41496" 00:12:20.878 }, 00:12:20.878 "auth": { 00:12:20.878 "state": "completed", 00:12:20.878 "digest": "sha384", 00:12:20.878 "dhgroup": "ffdhe8192" 00:12:20.878 } 00:12:20.878 } 00:12:20.878 ]' 00:12:20.878 19:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.878 19:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:20.878 19:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.878 19:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:20.878 19:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.878 19:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.878 19:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.878 19:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.136 19:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:12:22.070 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.070 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:22.070 19:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.070 19:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.070 19:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.070 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.070 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:22.070 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:22.390 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:22.390 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.390 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:22.390 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:22.390 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:22.391 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.391 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.391 19:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.391 19:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.391 19:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.391 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.391 19:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.955 00:12:22.955 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.955 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.955 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.213 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.213 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.213 19:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.213 19:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.213 19:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.213 
19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.213 { 00:12:23.213 "cntlid": 91, 00:12:23.213 "qid": 0, 00:12:23.213 "state": "enabled", 00:12:23.213 "thread": "nvmf_tgt_poll_group_000", 00:12:23.213 "listen_address": { 00:12:23.213 "trtype": "TCP", 00:12:23.213 "adrfam": "IPv4", 00:12:23.213 "traddr": "10.0.0.2", 00:12:23.213 "trsvcid": "4420" 00:12:23.213 }, 00:12:23.213 "peer_address": { 00:12:23.213 "trtype": "TCP", 00:12:23.213 "adrfam": "IPv4", 00:12:23.213 "traddr": "10.0.0.1", 00:12:23.213 "trsvcid": "41530" 00:12:23.213 }, 00:12:23.213 "auth": { 00:12:23.213 "state": "completed", 00:12:23.213 "digest": "sha384", 00:12:23.214 "dhgroup": "ffdhe8192" 00:12:23.214 } 00:12:23.214 } 00:12:23.214 ]' 00:12:23.214 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.214 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.214 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.214 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.214 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.472 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.472 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.472 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.729 19:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:12:24.295 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.295 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:24.295 19:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.295 19:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.295 19:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.295 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.295 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:24.295 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:24.553 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:24.553 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.553 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:12:24.553 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:24.553 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:24.553 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.553 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.553 19:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.553 19:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.553 19:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.553 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.553 19:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.119 00:12:25.119 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.119 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.119 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.376 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.376 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.376 19:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.376 19:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.376 19:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.376 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.376 { 00:12:25.376 "cntlid": 93, 00:12:25.376 "qid": 0, 00:12:25.376 "state": "enabled", 00:12:25.376 "thread": "nvmf_tgt_poll_group_000", 00:12:25.376 "listen_address": { 00:12:25.376 "trtype": "TCP", 00:12:25.376 "adrfam": "IPv4", 00:12:25.376 "traddr": "10.0.0.2", 00:12:25.376 "trsvcid": "4420" 00:12:25.376 }, 00:12:25.376 "peer_address": { 00:12:25.376 "trtype": "TCP", 00:12:25.376 "adrfam": "IPv4", 00:12:25.376 "traddr": "10.0.0.1", 00:12:25.376 "trsvcid": "55456" 00:12:25.376 }, 00:12:25.376 "auth": { 00:12:25.376 "state": "completed", 00:12:25.376 "digest": "sha384", 00:12:25.376 "dhgroup": "ffdhe8192" 00:12:25.376 } 00:12:25.376 } 00:12:25.376 ]' 00:12:25.376 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.634 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:25.634 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.634 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:25.634 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.634 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.634 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.634 19:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.892 19:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:12:26.828 19:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.828 19:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:26.828 19:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.828 19:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.828 19:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.828 19:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.828 19:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:26.828 19:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:26.828 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:26.828 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.828 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:26.828 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:26.828 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:26.828 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.828 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:12:26.828 19:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.828 19:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.828 19:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.829 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:26.829 19:02:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:27.396 00:12:27.396 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.396 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.396 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.656 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.656 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.656 19:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.656 19:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.914 19:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.914 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.914 { 00:12:27.914 "cntlid": 95, 00:12:27.914 "qid": 0, 00:12:27.914 "state": "enabled", 00:12:27.914 "thread": "nvmf_tgt_poll_group_000", 00:12:27.914 "listen_address": { 00:12:27.914 "trtype": "TCP", 00:12:27.914 "adrfam": "IPv4", 00:12:27.914 "traddr": "10.0.0.2", 00:12:27.914 "trsvcid": "4420" 00:12:27.914 }, 00:12:27.914 "peer_address": { 00:12:27.914 "trtype": "TCP", 00:12:27.914 "adrfam": "IPv4", 00:12:27.914 "traddr": "10.0.0.1", 00:12:27.914 "trsvcid": "55484" 00:12:27.914 }, 00:12:27.914 "auth": { 00:12:27.914 "state": "completed", 00:12:27.914 "digest": "sha384", 00:12:27.914 "dhgroup": "ffdhe8192" 00:12:27.914 } 00:12:27.914 } 00:12:27.914 ]' 00:12:27.914 19:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.914 19:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:27.914 19:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.914 19:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:27.914 19:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.914 19:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.914 19:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.914 19:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.172 19:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:12:29.104 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.104 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:29.104 19:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.104 19:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.104 19:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.104 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:29.104 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:29.104 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.104 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:29.104 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:29.363 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:29.363 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.363 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:29.363 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:29.363 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:29.363 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.363 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.363 19:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.363 19:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.363 19:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.363 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.363 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.620 00:12:29.620 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.620 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.620 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.879 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.879 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.879 19:02:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.879 19:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.879 19:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.879 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.879 { 00:12:29.879 "cntlid": 97, 00:12:29.879 "qid": 0, 00:12:29.879 "state": "enabled", 00:12:29.879 "thread": "nvmf_tgt_poll_group_000", 00:12:29.879 "listen_address": { 00:12:29.879 "trtype": "TCP", 00:12:29.879 "adrfam": "IPv4", 00:12:29.879 "traddr": "10.0.0.2", 00:12:29.879 "trsvcid": "4420" 00:12:29.879 }, 00:12:29.879 "peer_address": { 00:12:29.879 "trtype": "TCP", 00:12:29.879 "adrfam": "IPv4", 00:12:29.879 "traddr": "10.0.0.1", 00:12:29.879 "trsvcid": "55516" 00:12:29.879 }, 00:12:29.879 "auth": { 00:12:29.879 "state": "completed", 00:12:29.879 "digest": "sha512", 00:12:29.879 "dhgroup": "null" 00:12:29.879 } 00:12:29.879 } 00:12:29.879 ]' 00:12:29.879 19:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.879 19:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.879 19:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.879 19:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:29.879 19:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.879 19:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.879 19:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.879 19:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.136 19:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:12:31.067 19:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:31.067 19:02:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.067 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.324 00:12:31.324 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.324 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.324 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.581 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.581 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.581 19:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.581 19:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.581 19:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.581 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.581 { 00:12:31.581 "cntlid": 99, 00:12:31.581 "qid": 0, 00:12:31.581 "state": "enabled", 00:12:31.581 "thread": "nvmf_tgt_poll_group_000", 00:12:31.581 "listen_address": { 00:12:31.581 "trtype": "TCP", 00:12:31.581 "adrfam": "IPv4", 00:12:31.581 "traddr": "10.0.0.2", 00:12:31.581 "trsvcid": "4420" 00:12:31.581 }, 00:12:31.581 "peer_address": { 00:12:31.581 "trtype": "TCP", 00:12:31.581 "adrfam": "IPv4", 00:12:31.581 "traddr": "10.0.0.1", 00:12:31.581 "trsvcid": "55540" 00:12:31.581 }, 00:12:31.581 "auth": { 00:12:31.581 "state": "completed", 00:12:31.581 "digest": "sha512", 00:12:31.581 "dhgroup": "null" 00:12:31.581 } 00:12:31.581 } 00:12:31.581 ]' 00:12:31.581 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.880 19:02:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.880 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.880 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:31.880 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.880 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.880 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.880 19:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.137 19:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:12:32.702 19:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.702 19:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:32.702 19:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.702 19:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.702 19:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.702 19:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.702 19:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:32.702 19:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:32.960 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:32.960 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.960 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:32.960 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:32.960 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:32.960 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.960 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.960 19:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.960 19:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.218 19:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.218 19:03:00 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.218 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.477 00:12:33.477 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.477 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.477 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.735 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.735 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.735 19:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.735 19:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.735 19:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.735 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.735 { 00:12:33.735 "cntlid": 101, 00:12:33.735 "qid": 0, 00:12:33.735 "state": "enabled", 00:12:33.735 "thread": "nvmf_tgt_poll_group_000", 00:12:33.735 "listen_address": { 00:12:33.735 "trtype": "TCP", 00:12:33.735 "adrfam": "IPv4", 00:12:33.735 "traddr": "10.0.0.2", 00:12:33.735 "trsvcid": "4420" 00:12:33.735 }, 00:12:33.735 "peer_address": { 00:12:33.735 "trtype": "TCP", 00:12:33.735 "adrfam": "IPv4", 00:12:33.735 "traddr": "10.0.0.1", 00:12:33.735 "trsvcid": "55564" 00:12:33.735 }, 00:12:33.735 "auth": { 00:12:33.735 "state": "completed", 00:12:33.735 "digest": "sha512", 00:12:33.735 "dhgroup": "null" 00:12:33.735 } 00:12:33.735 } 00:12:33.735 ]' 00:12:33.735 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.735 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.735 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.735 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:33.735 19:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.735 19:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.735 19:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.735 19:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.994 19:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret 
DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:12:34.929 19:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.929 19:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:34.929 19:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.929 19:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.929 19:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.929 19:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.929 19:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:34.929 19:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:34.929 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:34.929 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.929 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:34.929 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:34.929 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:34.929 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.929 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:12:34.929 19:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.929 19:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.188 19:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.188 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:35.188 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:35.446 00:12:35.446 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.446 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.446 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.792 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.793 { 00:12:35.793 "cntlid": 103, 00:12:35.793 "qid": 0, 00:12:35.793 "state": "enabled", 00:12:35.793 "thread": "nvmf_tgt_poll_group_000", 00:12:35.793 "listen_address": { 00:12:35.793 "trtype": "TCP", 00:12:35.793 "adrfam": "IPv4", 00:12:35.793 "traddr": "10.0.0.2", 00:12:35.793 "trsvcid": "4420" 00:12:35.793 }, 00:12:35.793 "peer_address": { 00:12:35.793 "trtype": "TCP", 00:12:35.793 "adrfam": "IPv4", 00:12:35.793 "traddr": "10.0.0.1", 00:12:35.793 "trsvcid": "57662" 00:12:35.793 }, 00:12:35.793 "auth": { 00:12:35.793 "state": "completed", 00:12:35.793 "digest": "sha512", 00:12:35.793 "dhgroup": "null" 00:12:35.793 } 00:12:35.793 } 00:12:35.793 ]' 00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.793 19:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.051 19:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:12:36.985 19:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.985 19:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:36.985 19:03:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.985 19:03:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.985 19:03:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.985 19:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:36.985 19:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.985 19:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:36.985 19:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe2048 00:12:36.985 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:36.985 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.985 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:36.985 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:36.985 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:36.985 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.985 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.985 19:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.985 19:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.985 19:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.985 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.985 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.552 00:12:37.552 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.552 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.552 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.552 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.552 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.552 19:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.552 19:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.552 19:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.552 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:37.552 { 00:12:37.552 "cntlid": 105, 00:12:37.552 "qid": 0, 00:12:37.552 "state": "enabled", 00:12:37.552 "thread": "nvmf_tgt_poll_group_000", 00:12:37.552 "listen_address": { 00:12:37.552 "trtype": "TCP", 00:12:37.552 "adrfam": "IPv4", 00:12:37.552 "traddr": "10.0.0.2", 00:12:37.552 "trsvcid": "4420" 00:12:37.552 }, 00:12:37.552 "peer_address": { 00:12:37.552 "trtype": "TCP", 00:12:37.552 "adrfam": "IPv4", 00:12:37.552 "traddr": "10.0.0.1", 00:12:37.552 "trsvcid": "57684" 00:12:37.552 }, 00:12:37.552 "auth": { 00:12:37.552 "state": "completed", 00:12:37.552 "digest": "sha512", 00:12:37.552 "dhgroup": "ffdhe2048" 00:12:37.552 } 00:12:37.552 } 00:12:37.552 ]' 00:12:37.552 19:03:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:37.811 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.811 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.811 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:37.811 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:37.811 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.811 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.811 19:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.069 19:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:12:38.635 19:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.635 19:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:38.635 19:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.635 19:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.635 19:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.635 19:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.635 19:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:38.635 19:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:38.894 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:38.894 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.894 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:38.894 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:38.894 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:38.894 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.894 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.894 19:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.894 19:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:12:39.152 19:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.152 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.152 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.409 00:12:39.409 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.409 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.409 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.666 { 00:12:39.666 "cntlid": 107, 00:12:39.666 "qid": 0, 00:12:39.666 "state": "enabled", 00:12:39.666 "thread": "nvmf_tgt_poll_group_000", 00:12:39.666 "listen_address": { 00:12:39.666 "trtype": "TCP", 00:12:39.666 "adrfam": "IPv4", 00:12:39.666 "traddr": "10.0.0.2", 00:12:39.666 "trsvcid": "4420" 00:12:39.666 }, 00:12:39.666 "peer_address": { 00:12:39.666 "trtype": "TCP", 00:12:39.666 "adrfam": "IPv4", 00:12:39.666 "traddr": "10.0.0.1", 00:12:39.666 "trsvcid": "57718" 00:12:39.666 }, 00:12:39.666 "auth": { 00:12:39.666 "state": "completed", 00:12:39.666 "digest": "sha512", 00:12:39.666 "dhgroup": "ffdhe2048" 00:12:39.666 } 00:12:39.666 } 00:12:39.666 ]' 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.666 19:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.253 19:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 
--hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:12:40.819 19:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.819 19:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:40.819 19:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.819 19:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.819 19:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.819 19:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:40.819 19:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:40.819 19:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:41.077 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:41.077 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.077 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.077 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:41.077 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:41.077 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.077 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.077 19:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.077 19:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.077 19:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.077 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.077 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.335 00:12:41.335 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.335 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.335 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:41.593 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.593 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.593 19:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.593 19:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.593 19:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.593 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.593 { 00:12:41.593 "cntlid": 109, 00:12:41.593 "qid": 0, 00:12:41.593 "state": "enabled", 00:12:41.593 "thread": "nvmf_tgt_poll_group_000", 00:12:41.593 "listen_address": { 00:12:41.593 "trtype": "TCP", 00:12:41.593 "adrfam": "IPv4", 00:12:41.593 "traddr": "10.0.0.2", 00:12:41.593 "trsvcid": "4420" 00:12:41.593 }, 00:12:41.593 "peer_address": { 00:12:41.593 "trtype": "TCP", 00:12:41.593 "adrfam": "IPv4", 00:12:41.593 "traddr": "10.0.0.1", 00:12:41.593 "trsvcid": "57766" 00:12:41.593 }, 00:12:41.593 "auth": { 00:12:41.593 "state": "completed", 00:12:41.593 "digest": "sha512", 00:12:41.593 "dhgroup": "ffdhe2048" 00:12:41.593 } 00:12:41.593 } 00:12:41.593 ]' 00:12:41.593 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.593 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.593 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.852 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:41.852 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.852 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.852 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.852 19:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.109 19:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:12:42.676 19:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.676 19:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:42.676 19:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.676 19:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.676 19:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.676 19:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.676 19:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:42.676 19:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:42.934 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:42.934 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.934 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:42.934 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:42.934 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:42.934 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.934 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:12:42.934 19:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.934 19:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.934 19:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.934 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.934 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.500 00:12:43.500 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.500 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.500 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.500 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.500 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.500 19:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.500 19:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.500 19:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.500 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.500 { 00:12:43.500 "cntlid": 111, 00:12:43.500 "qid": 0, 00:12:43.500 "state": "enabled", 00:12:43.500 "thread": "nvmf_tgt_poll_group_000", 00:12:43.500 "listen_address": { 00:12:43.500 "trtype": "TCP", 00:12:43.500 "adrfam": "IPv4", 00:12:43.500 "traddr": "10.0.0.2", 00:12:43.500 "trsvcid": "4420" 00:12:43.500 }, 00:12:43.500 "peer_address": { 00:12:43.500 "trtype": "TCP", 00:12:43.500 "adrfam": "IPv4", 00:12:43.500 "traddr": "10.0.0.1", 00:12:43.500 "trsvcid": "57792" 00:12:43.500 }, 00:12:43.500 "auth": { 00:12:43.500 "state": 
"completed", 00:12:43.500 "digest": "sha512", 00:12:43.500 "dhgroup": "ffdhe2048" 00:12:43.500 } 00:12:43.500 } 00:12:43.500 ]' 00:12:43.757 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.757 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.757 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.757 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:43.757 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.757 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.757 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.757 19:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.014 19:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:12:44.580 19:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.580 19:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:44.580 19:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.580 19:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.580 19:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.580 19:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:44.580 19:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.580 19:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:44.580 19:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:44.839 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:44.839 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.839 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:44.839 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:44.839 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:44.839 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.839 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:12:44.839 19:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.839 19:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.839 19:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.839 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.839 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.164 00:12:45.164 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.164 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.164 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.730 { 00:12:45.730 "cntlid": 113, 00:12:45.730 "qid": 0, 00:12:45.730 "state": "enabled", 00:12:45.730 "thread": "nvmf_tgt_poll_group_000", 00:12:45.730 "listen_address": { 00:12:45.730 "trtype": "TCP", 00:12:45.730 "adrfam": "IPv4", 00:12:45.730 "traddr": "10.0.0.2", 00:12:45.730 "trsvcid": "4420" 00:12:45.730 }, 00:12:45.730 "peer_address": { 00:12:45.730 "trtype": "TCP", 00:12:45.730 "adrfam": "IPv4", 00:12:45.730 "traddr": "10.0.0.1", 00:12:45.730 "trsvcid": "36822" 00:12:45.730 }, 00:12:45.730 "auth": { 00:12:45.730 "state": "completed", 00:12:45.730 "digest": "sha512", 00:12:45.730 "dhgroup": "ffdhe3072" 00:12:45.730 } 00:12:45.730 } 00:12:45.730 ]' 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.730 19:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.988 19:03:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:12:46.917 19:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.917 19:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:46.917 19:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.917 19:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.917 19:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.917 19:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.917 19:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:46.918 19:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:47.175 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:47.175 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.175 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:47.175 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:47.175 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:47.175 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.175 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.175 19:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.175 19:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.175 19:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.175 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.175 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.433 00:12:47.433 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:12:47.433 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.433 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.691 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.691 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.691 19:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.691 19:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.691 19:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.691 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.691 { 00:12:47.691 "cntlid": 115, 00:12:47.691 "qid": 0, 00:12:47.691 "state": "enabled", 00:12:47.691 "thread": "nvmf_tgt_poll_group_000", 00:12:47.691 "listen_address": { 00:12:47.691 "trtype": "TCP", 00:12:47.691 "adrfam": "IPv4", 00:12:47.691 "traddr": "10.0.0.2", 00:12:47.691 "trsvcid": "4420" 00:12:47.691 }, 00:12:47.691 "peer_address": { 00:12:47.691 "trtype": "TCP", 00:12:47.691 "adrfam": "IPv4", 00:12:47.691 "traddr": "10.0.0.1", 00:12:47.691 "trsvcid": "36850" 00:12:47.691 }, 00:12:47.691 "auth": { 00:12:47.691 "state": "completed", 00:12:47.691 "digest": "sha512", 00:12:47.691 "dhgroup": "ffdhe3072" 00:12:47.691 } 00:12:47.691 } 00:12:47.691 ]' 00:12:47.691 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.691 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.691 19:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.948 19:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:47.948 19:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.948 19:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.948 19:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.948 19:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.206 19:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:12:48.770 19:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.770 19:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:48.770 19:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.770 19:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.770 19:03:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.770 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.770 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:48.770 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:49.027 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:49.027 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.027 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:49.027 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:49.027 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:49.028 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.028 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.028 19:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.028 19:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.028 19:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.028 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.028 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.285 00:12:49.542 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.542 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.542 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.800 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.800 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.800 19:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.800 19:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.800 19:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.800 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.800 { 00:12:49.800 "cntlid": 117, 00:12:49.800 "qid": 0, 00:12:49.800 "state": "enabled", 00:12:49.800 "thread": "nvmf_tgt_poll_group_000", 00:12:49.800 "listen_address": { 00:12:49.800 "trtype": "TCP", 00:12:49.800 "adrfam": "IPv4", 
00:12:49.800 "traddr": "10.0.0.2", 00:12:49.800 "trsvcid": "4420" 00:12:49.800 }, 00:12:49.800 "peer_address": { 00:12:49.800 "trtype": "TCP", 00:12:49.800 "adrfam": "IPv4", 00:12:49.800 "traddr": "10.0.0.1", 00:12:49.800 "trsvcid": "36878" 00:12:49.800 }, 00:12:49.800 "auth": { 00:12:49.800 "state": "completed", 00:12:49.800 "digest": "sha512", 00:12:49.800 "dhgroup": "ffdhe3072" 00:12:49.800 } 00:12:49.800 } 00:12:49.800 ]' 00:12:49.800 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.800 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.800 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.800 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:49.800 19:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:49.800 19:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.800 19:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.800 19:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.057 19:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:12:50.623 19:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.623 19:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:50.623 19:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.623 19:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.623 19:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.623 19:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.623 19:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:50.623 19:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:50.881 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:50.881 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.881 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:50.881 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:50.881 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:50.881 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:12:50.881 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:12:50.881 19:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.881 19:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.881 19:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.881 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.881 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:51.447 00:12:51.447 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:51.447 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.447 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.705 { 00:12:51.705 "cntlid": 119, 00:12:51.705 "qid": 0, 00:12:51.705 "state": "enabled", 00:12:51.705 "thread": "nvmf_tgt_poll_group_000", 00:12:51.705 "listen_address": { 00:12:51.705 "trtype": "TCP", 00:12:51.705 "adrfam": "IPv4", 00:12:51.705 "traddr": "10.0.0.2", 00:12:51.705 "trsvcid": "4420" 00:12:51.705 }, 00:12:51.705 "peer_address": { 00:12:51.705 "trtype": "TCP", 00:12:51.705 "adrfam": "IPv4", 00:12:51.705 "traddr": "10.0.0.1", 00:12:51.705 "trsvcid": "36910" 00:12:51.705 }, 00:12:51.705 "auth": { 00:12:51.705 "state": "completed", 00:12:51.705 "digest": "sha512", 00:12:51.705 "dhgroup": "ffdhe3072" 00:12:51.705 } 00:12:51.705 } 00:12:51.705 ]' 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.705 19:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.705 19:03:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.964 19:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:12:52.531 19:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.789 19:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:52.789 19:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.789 19:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.789 19:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.789 19:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.789 19:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.789 19:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:52.789 19:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:53.046 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:53.046 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.046 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:53.046 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:53.046 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:53.046 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.046 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.046 19:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.046 19:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.046 19:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.047 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.047 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.304 00:12:53.304 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.304 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.304 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.562 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.562 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.562 19:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.562 19:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.562 19:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.562 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.562 { 00:12:53.562 "cntlid": 121, 00:12:53.562 "qid": 0, 00:12:53.562 "state": "enabled", 00:12:53.562 "thread": "nvmf_tgt_poll_group_000", 00:12:53.562 "listen_address": { 00:12:53.562 "trtype": "TCP", 00:12:53.562 "adrfam": "IPv4", 00:12:53.562 "traddr": "10.0.0.2", 00:12:53.562 "trsvcid": "4420" 00:12:53.562 }, 00:12:53.562 "peer_address": { 00:12:53.562 "trtype": "TCP", 00:12:53.562 "adrfam": "IPv4", 00:12:53.562 "traddr": "10.0.0.1", 00:12:53.562 "trsvcid": "36938" 00:12:53.562 }, 00:12:53.562 "auth": { 00:12:53.562 "state": "completed", 00:12:53.562 "digest": "sha512", 00:12:53.562 "dhgroup": "ffdhe4096" 00:12:53.562 } 00:12:53.562 } 00:12:53.562 ]' 00:12:53.562 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.820 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.820 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.820 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:53.820 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.820 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.820 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.820 19:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.078 19:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:12:54.668 19:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.668 19:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:54.668 19:03:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.668 19:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.668 19:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.668 19:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.668 19:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:54.668 19:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:54.927 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:54.927 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.927 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:54.927 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:54.927 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:54.927 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.927 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.927 19:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.927 19:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.185 19:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.185 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.185 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.443 00:12:55.443 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.443 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.443 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.701 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.701 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.701 19:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.701 19:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.701 19:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.701 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.701 { 
00:12:55.701 "cntlid": 123, 00:12:55.701 "qid": 0, 00:12:55.701 "state": "enabled", 00:12:55.701 "thread": "nvmf_tgt_poll_group_000", 00:12:55.701 "listen_address": { 00:12:55.701 "trtype": "TCP", 00:12:55.701 "adrfam": "IPv4", 00:12:55.701 "traddr": "10.0.0.2", 00:12:55.701 "trsvcid": "4420" 00:12:55.701 }, 00:12:55.701 "peer_address": { 00:12:55.701 "trtype": "TCP", 00:12:55.701 "adrfam": "IPv4", 00:12:55.701 "traddr": "10.0.0.1", 00:12:55.701 "trsvcid": "52694" 00:12:55.701 }, 00:12:55.701 "auth": { 00:12:55.701 "state": "completed", 00:12:55.701 "digest": "sha512", 00:12:55.701 "dhgroup": "ffdhe4096" 00:12:55.701 } 00:12:55.701 } 00:12:55.701 ]' 00:12:55.701 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.701 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.701 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.701 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:55.701 19:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.959 19:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.959 19:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.959 19:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.217 19:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:12:56.780 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.781 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:56.781 19:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.781 19:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.781 19:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.781 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.781 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:56.781 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:57.038 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:57.038 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.038 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:57.038 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe4096 00:12:57.038 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:57.038 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.038 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.038 19:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.038 19:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.295 19:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.295 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.295 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.744 00:12:57.744 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:57.744 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.744 19:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.004 { 00:12:58.004 "cntlid": 125, 00:12:58.004 "qid": 0, 00:12:58.004 "state": "enabled", 00:12:58.004 "thread": "nvmf_tgt_poll_group_000", 00:12:58.004 "listen_address": { 00:12:58.004 "trtype": "TCP", 00:12:58.004 "adrfam": "IPv4", 00:12:58.004 "traddr": "10.0.0.2", 00:12:58.004 "trsvcid": "4420" 00:12:58.004 }, 00:12:58.004 "peer_address": { 00:12:58.004 "trtype": "TCP", 00:12:58.004 "adrfam": "IPv4", 00:12:58.004 "traddr": "10.0.0.1", 00:12:58.004 "trsvcid": "52718" 00:12:58.004 }, 00:12:58.004 "auth": { 00:12:58.004 "state": "completed", 00:12:58.004 "digest": "sha512", 00:12:58.004 "dhgroup": "ffdhe4096" 00:12:58.004 } 00:12:58.004 } 00:12:58.004 ]' 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.004 19:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.263 19:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.197 19:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.455 19:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.455 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.455 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.715 00:12:59.715 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:59.715 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:59.715 19:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.974 { 00:12:59.974 "cntlid": 127, 00:12:59.974 "qid": 0, 00:12:59.974 "state": "enabled", 00:12:59.974 "thread": "nvmf_tgt_poll_group_000", 00:12:59.974 "listen_address": { 00:12:59.974 "trtype": "TCP", 00:12:59.974 "adrfam": "IPv4", 00:12:59.974 "traddr": "10.0.0.2", 00:12:59.974 "trsvcid": "4420" 00:12:59.974 }, 00:12:59.974 "peer_address": { 00:12:59.974 "trtype": "TCP", 00:12:59.974 "adrfam": "IPv4", 00:12:59.974 "traddr": "10.0.0.1", 00:12:59.974 "trsvcid": "52750" 00:12:59.974 }, 00:12:59.974 "auth": { 00:12:59.974 "state": "completed", 00:12:59.974 "digest": "sha512", 00:12:59.974 "dhgroup": "ffdhe4096" 00:12:59.974 } 00:12:59.974 } 00:12:59.974 ]' 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.974 19:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.540 19:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:13:01.108 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.108 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:01.108 19:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.108 19:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.108 19:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.108 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:01.108 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:01.108 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:01.108 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:01.367 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:13:01.367 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:01.367 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:01.367 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:01.367 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:01.367 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.367 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.367 19:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.367 19:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.367 19:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.367 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.367 19:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.934 00:13:01.934 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.934 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.934 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.193 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.193 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.193 19:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.193 19:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:02.193 19:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.193 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:02.193 { 00:13:02.193 "cntlid": 129, 00:13:02.193 "qid": 0, 00:13:02.193 "state": "enabled", 00:13:02.193 "thread": "nvmf_tgt_poll_group_000", 00:13:02.193 "listen_address": { 00:13:02.193 "trtype": "TCP", 00:13:02.193 "adrfam": "IPv4", 00:13:02.193 "traddr": "10.0.0.2", 00:13:02.193 "trsvcid": "4420" 00:13:02.193 }, 00:13:02.193 "peer_address": { 00:13:02.193 "trtype": "TCP", 00:13:02.193 "adrfam": "IPv4", 00:13:02.193 "traddr": "10.0.0.1", 00:13:02.193 "trsvcid": "52774" 00:13:02.193 }, 00:13:02.193 "auth": { 00:13:02.193 "state": "completed", 00:13:02.193 "digest": "sha512", 00:13:02.193 "dhgroup": "ffdhe6144" 00:13:02.193 } 00:13:02.193 } 00:13:02.193 ]' 00:13:02.193 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:02.193 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.193 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:02.453 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:02.453 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:02.453 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.453 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.453 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.711 19:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:13:03.290 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.290 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:03.290 19:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.290 19:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.290 19:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.290 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:03.290 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:03.290 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:03.879 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:03.879 19:03:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.879 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:03.879 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:03.879 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:03.879 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.879 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.879 19:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.879 19:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.879 19:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.879 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.879 19:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.137 00:13:04.137 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:04.137 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:04.137 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.394 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.394 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.394 19:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.394 19:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.394 19:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.394 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:04.394 { 00:13:04.394 "cntlid": 131, 00:13:04.394 "qid": 0, 00:13:04.394 "state": "enabled", 00:13:04.394 "thread": "nvmf_tgt_poll_group_000", 00:13:04.394 "listen_address": { 00:13:04.394 "trtype": "TCP", 00:13:04.394 "adrfam": "IPv4", 00:13:04.395 "traddr": "10.0.0.2", 00:13:04.395 "trsvcid": "4420" 00:13:04.395 }, 00:13:04.395 "peer_address": { 00:13:04.395 "trtype": "TCP", 00:13:04.395 "adrfam": "IPv4", 00:13:04.395 "traddr": "10.0.0.1", 00:13:04.395 "trsvcid": "52802" 00:13:04.395 }, 00:13:04.395 "auth": { 00:13:04.395 "state": "completed", 00:13:04.395 "digest": "sha512", 00:13:04.395 "dhgroup": "ffdhe6144" 00:13:04.395 } 00:13:04.395 } 00:13:04.395 ]' 00:13:04.395 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:04.395 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.395 19:03:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:04.395 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:04.395 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:04.652 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.652 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.652 19:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.910 19:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:13:05.477 19:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.477 19:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:05.477 19:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.477 19:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.477 19:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.477 19:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:05.477 19:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:05.477 19:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:05.736 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:05.736 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:05.736 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:05.736 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:05.736 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:05.736 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.736 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.736 19:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.736 19:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.994 19:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.994 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.994 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.252 00:13:06.252 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:06.252 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.252 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.511 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.511 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.511 19:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.511 19:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.511 19:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.511 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:06.511 { 00:13:06.511 "cntlid": 133, 00:13:06.511 "qid": 0, 00:13:06.511 "state": "enabled", 00:13:06.511 "thread": "nvmf_tgt_poll_group_000", 00:13:06.511 "listen_address": { 00:13:06.511 "trtype": "TCP", 00:13:06.511 "adrfam": "IPv4", 00:13:06.511 "traddr": "10.0.0.2", 00:13:06.511 "trsvcid": "4420" 00:13:06.511 }, 00:13:06.511 "peer_address": { 00:13:06.511 "trtype": "TCP", 00:13:06.511 "adrfam": "IPv4", 00:13:06.511 "traddr": "10.0.0.1", 00:13:06.511 "trsvcid": "50096" 00:13:06.511 }, 00:13:06.511 "auth": { 00:13:06.511 "state": "completed", 00:13:06.511 "digest": "sha512", 00:13:06.511 "dhgroup": "ffdhe6144" 00:13:06.511 } 00:13:06.511 } 00:13:06.511 ]' 00:13:06.511 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:06.511 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.511 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:06.769 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:06.769 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:06.769 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.769 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.769 19:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.026 19:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret 
DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:13:07.592 19:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.850 19:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:07.850 19:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.850 19:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.850 19:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.850 19:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:07.850 19:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:07.850 19:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:08.173 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:08.173 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:08.173 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:08.173 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:08.173 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:08.173 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.173 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:13:08.173 19:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.173 19:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.173 19:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.173 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:08.173 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:08.431 00:13:08.431 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.431 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.431 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.689 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.689 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:13:08.689 19:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.689 19:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 19:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.689 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:08.689 { 00:13:08.689 "cntlid": 135, 00:13:08.689 "qid": 0, 00:13:08.689 "state": "enabled", 00:13:08.689 "thread": "nvmf_tgt_poll_group_000", 00:13:08.689 "listen_address": { 00:13:08.689 "trtype": "TCP", 00:13:08.689 "adrfam": "IPv4", 00:13:08.689 "traddr": "10.0.0.2", 00:13:08.689 "trsvcid": "4420" 00:13:08.689 }, 00:13:08.689 "peer_address": { 00:13:08.689 "trtype": "TCP", 00:13:08.689 "adrfam": "IPv4", 00:13:08.689 "traddr": "10.0.0.1", 00:13:08.689 "trsvcid": "50118" 00:13:08.689 }, 00:13:08.689 "auth": { 00:13:08.689 "state": "completed", 00:13:08.689 "digest": "sha512", 00:13:08.689 "dhgroup": "ffdhe6144" 00:13:08.689 } 00:13:08.689 } 00:13:08.689 ]' 00:13:08.689 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.689 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.689 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.689 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:08.946 19:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.946 19:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.946 19:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.946 19:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.204 19:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:13:09.772 19:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.772 19:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:09.772 19:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.772 19:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.772 19:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.772 19:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.772 19:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.772 19:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:09.772 19:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:10.030 19:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:13:10.030 19:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.030 19:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:10.030 19:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:10.030 19:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:10.030 19:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.030 19:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.030 19:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.030 19:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.030 19:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.030 19:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.030 19:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.965 00:13:10.965 19:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.965 19:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.965 19:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.965 19:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.965 19:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.965 19:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.965 19:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.965 19:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.965 19:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:10.965 { 00:13:10.965 "cntlid": 137, 00:13:10.965 "qid": 0, 00:13:10.965 "state": "enabled", 00:13:10.965 "thread": "nvmf_tgt_poll_group_000", 00:13:10.965 "listen_address": { 00:13:10.965 "trtype": "TCP", 00:13:10.965 "adrfam": "IPv4", 00:13:10.965 "traddr": "10.0.0.2", 00:13:10.965 "trsvcid": "4420" 00:13:10.965 }, 00:13:10.965 "peer_address": { 00:13:10.965 "trtype": "TCP", 00:13:10.965 "adrfam": "IPv4", 00:13:10.965 "traddr": "10.0.0.1", 00:13:10.965 "trsvcid": "50136" 00:13:10.965 }, 00:13:10.965 "auth": { 00:13:10.965 "state": "completed", 00:13:10.965 "digest": "sha512", 00:13:10.965 "dhgroup": "ffdhe8192" 00:13:10.965 } 00:13:10.965 } 
00:13:10.965 ]' 00:13:10.965 19:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.224 19:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.224 19:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:11.224 19:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:11.224 19:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:11.224 19:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.224 19:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.224 19:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.482 19:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:13:12.048 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.048 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:12.048 19:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.048 19:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.048 19:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.048 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.048 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:12.048 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:12.307 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:12.307 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.307 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:12.307 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:12.307 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:12.307 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.307 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.307 19:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.307 19:03:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.307 19:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.307 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.307 19:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.241 00:13:13.241 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.241 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.241 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.241 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.241 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.241 19:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.241 19:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.241 19:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.241 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.241 { 00:13:13.241 "cntlid": 139, 00:13:13.241 "qid": 0, 00:13:13.241 "state": "enabled", 00:13:13.241 "thread": "nvmf_tgt_poll_group_000", 00:13:13.241 "listen_address": { 00:13:13.241 "trtype": "TCP", 00:13:13.241 "adrfam": "IPv4", 00:13:13.241 "traddr": "10.0.0.2", 00:13:13.241 "trsvcid": "4420" 00:13:13.241 }, 00:13:13.241 "peer_address": { 00:13:13.241 "trtype": "TCP", 00:13:13.241 "adrfam": "IPv4", 00:13:13.241 "traddr": "10.0.0.1", 00:13:13.241 "trsvcid": "50152" 00:13:13.241 }, 00:13:13.241 "auth": { 00:13:13.241 "state": "completed", 00:13:13.241 "digest": "sha512", 00:13:13.241 "dhgroup": "ffdhe8192" 00:13:13.241 } 00:13:13.241 } 00:13:13.241 ]' 00:13:13.241 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.499 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.499 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.499 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:13.500 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.500 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.500 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.500 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.758 19:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:01:NWNlZWIzNWE1ZDM5MGYyMzJjYjlhYmViYTlmMjUzOTE6zOXm: --dhchap-ctrl-secret DHHC-1:02:OTM0NzJkNDViNmM1MjRmOGJlMmE3NGUxZDlhMDJhMzI2ZWQwYWY4ZGU2ZmNkOWFmp+I60A==: 00:13:14.323 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.323 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:14.323 19:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.323 19:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.323 19:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.323 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.323 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:14.324 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:14.581 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:14.581 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.581 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:14.581 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:14.581 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:14.581 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.581 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.581 19:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.581 19:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.581 19:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.581 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.581 19:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.515 00:13:15.515 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:15.515 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
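(Annotation: the entries above are one pass of the test's connect_authenticate loop, which it repeats for every digest/dhgroup/key combination. Condensed into the underlying RPC calls, one iteration looks roughly like the sketch below. This is a minimal sketch, not the test script itself: every RPC name and flag is taken verbatim from the log, the host NQN UUID and subsystem NQN are the ones the log uses, key2/ckey2 are key names registered earlier in the test setup (not shown in this excerpt), and the target-side calls are assumed to reach the target's default RPC socket, whereas the test routes them through its rpc_cmd wrapper.)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: restrict the initiator to the digest/dhgroup under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Target side: allow this host on the subsystem with the DH-HMAC-CHAP key pair.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach; the controller only appears if authentication succeeds.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify: the controller exists and the target reports the qpair as
  # authenticated with the expected digest and dhgroup.
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]

  # Tear down before the next combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

(As the log shows, each iteration also repeats the same check through the kernel initiator, using nvme connect with the corresponding DHHC-1 secrets and nvme disconnect, before the host entry is removed.)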
00:13:15.515 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.773 { 00:13:15.773 "cntlid": 141, 00:13:15.773 "qid": 0, 00:13:15.773 "state": "enabled", 00:13:15.773 "thread": "nvmf_tgt_poll_group_000", 00:13:15.773 "listen_address": { 00:13:15.773 "trtype": "TCP", 00:13:15.773 "adrfam": "IPv4", 00:13:15.773 "traddr": "10.0.0.2", 00:13:15.773 "trsvcid": "4420" 00:13:15.773 }, 00:13:15.773 "peer_address": { 00:13:15.773 "trtype": "TCP", 00:13:15.773 "adrfam": "IPv4", 00:13:15.773 "traddr": "10.0.0.1", 00:13:15.773 "trsvcid": "37828" 00:13:15.773 }, 00:13:15.773 "auth": { 00:13:15.773 "state": "completed", 00:13:15.773 "digest": "sha512", 00:13:15.773 "dhgroup": "ffdhe8192" 00:13:15.773 } 00:13:15.773 } 00:13:15.773 ]' 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.773 19:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.031 19:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:02:OWRkYWY4ODk1NDcyZDQyMmZkYmFlYTFiMDY0ZGZiMDFmZTJjMjFkY2Q2ZjVjOTgw+5Pn/w==: --dhchap-ctrl-secret DHHC-1:01:ZDRkMDE0MDE2ZmU3Yzk1OGExZWQ5NTc2MmU3ZThiNjmMHvHz: 00:13:16.963 19:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.963 19:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:16.963 19:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.963 19:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.963 19:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.963 19:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.963 19:03:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:16.963 19:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:16.963 19:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:16.964 19:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.964 19:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:16.964 19:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:16.964 19:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:16.964 19:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.964 19:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:13:16.964 19:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.964 19:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.281 19:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.281 19:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:17.281 19:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:17.923 00:13:17.923 19:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.923 19:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.923 19:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.923 19:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.923 19:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.923 19:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.923 19:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.923 19:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.923 19:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.923 { 00:13:17.923 "cntlid": 143, 00:13:17.923 "qid": 0, 00:13:17.923 "state": "enabled", 00:13:17.923 "thread": "nvmf_tgt_poll_group_000", 00:13:17.923 "listen_address": { 00:13:17.923 "trtype": "TCP", 00:13:17.923 "adrfam": "IPv4", 00:13:17.923 "traddr": "10.0.0.2", 00:13:17.923 "trsvcid": "4420" 00:13:17.923 }, 00:13:17.923 "peer_address": { 00:13:17.923 "trtype": "TCP", 00:13:17.923 "adrfam": "IPv4", 00:13:17.923 "traddr": "10.0.0.1", 00:13:17.923 "trsvcid": "37860" 
00:13:17.923 }, 00:13:17.923 "auth": { 00:13:17.923 "state": "completed", 00:13:17.923 "digest": "sha512", 00:13:17.923 "dhgroup": "ffdhe8192" 00:13:17.923 } 00:13:17.923 } 00:13:17.923 ]' 00:13:17.923 19:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.923 19:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.923 19:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:18.181 19:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:18.181 19:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:18.181 19:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.181 19:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.181 19:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.440 19:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:13:19.008 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.008 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:19.008 19:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.008 19:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.008 19:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.008 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:19.008 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:19.008 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:19.008 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:19.008 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:19.008 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:19.267 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:19.267 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:19.267 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:19.267 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:19.267 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:13:19.267 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.267 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.267 19:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.267 19:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.267 19:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.267 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.267 19:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.832 00:13:19.832 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.832 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.832 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.088 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.088 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.088 19:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.088 19:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.089 19:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.089 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:20.089 { 00:13:20.089 "cntlid": 145, 00:13:20.089 "qid": 0, 00:13:20.089 "state": "enabled", 00:13:20.089 "thread": "nvmf_tgt_poll_group_000", 00:13:20.089 "listen_address": { 00:13:20.089 "trtype": "TCP", 00:13:20.089 "adrfam": "IPv4", 00:13:20.089 "traddr": "10.0.0.2", 00:13:20.089 "trsvcid": "4420" 00:13:20.089 }, 00:13:20.089 "peer_address": { 00:13:20.089 "trtype": "TCP", 00:13:20.089 "adrfam": "IPv4", 00:13:20.089 "traddr": "10.0.0.1", 00:13:20.089 "trsvcid": "37890" 00:13:20.089 }, 00:13:20.089 "auth": { 00:13:20.089 "state": "completed", 00:13:20.089 "digest": "sha512", 00:13:20.089 "dhgroup": "ffdhe8192" 00:13:20.089 } 00:13:20.089 } 00:13:20.089 ]' 00:13:20.089 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:20.089 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:20.345 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:20.345 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:20.345 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:20.345 19:03:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.345 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.345 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.602 19:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:00:Y2JmZGExMDViNjdkYmM1NWQ3OGI3OTM0NDQ0YWFjOTljODA3MjU4NmI4MTFlZjll2bSDsg==: --dhchap-ctrl-secret DHHC-1:03:YjcwNDM4ODZhNjNhNjFhZTA1MjYwYjBlYWQ5ODQ4Zjg3NTA2NzQ5MDVkNTJiYzI0MWFkMjc2MTY1MTExMDgxNPH9uNg=: 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:21.168 19:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:21.734 request: 00:13:21.734 { 00:13:21.734 "name": "nvme0", 00:13:21.734 "trtype": "tcp", 00:13:21.734 "traddr": "10.0.0.2", 00:13:21.734 "adrfam": "ipv4", 00:13:21.734 "trsvcid": "4420", 00:13:21.734 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:21.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff", 00:13:21.734 "prchk_reftag": false, 00:13:21.734 "prchk_guard": false, 00:13:21.734 "hdgst": false, 00:13:21.734 "ddgst": false, 00:13:21.734 "dhchap_key": "key2", 00:13:21.734 "method": "bdev_nvme_attach_controller", 00:13:21.734 "req_id": 1 00:13:21.734 } 00:13:21.734 Got JSON-RPC error response 00:13:21.734 response: 00:13:21.734 { 00:13:21.734 "code": -5, 00:13:21.734 "message": "Input/output error" 00:13:21.734 } 00:13:21.734 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:21.734 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:21.734 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:21.734 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:21.734 19:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:21.734 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.734 19:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:21.734 19:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:22.667 request: 00:13:22.667 { 00:13:22.667 "name": "nvme0", 00:13:22.667 "trtype": "tcp", 00:13:22.667 "traddr": "10.0.0.2", 00:13:22.667 "adrfam": "ipv4", 00:13:22.667 "trsvcid": "4420", 00:13:22.667 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:22.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff", 00:13:22.667 "prchk_reftag": false, 00:13:22.667 "prchk_guard": false, 00:13:22.667 "hdgst": false, 00:13:22.667 "ddgst": false, 00:13:22.667 "dhchap_key": "key1", 00:13:22.667 "dhchap_ctrlr_key": "ckey2", 00:13:22.667 "method": "bdev_nvme_attach_controller", 00:13:22.667 "req_id": 1 00:13:22.667 } 00:13:22.667 Got JSON-RPC error response 00:13:22.667 response: 00:13:22.667 { 00:13:22.667 "code": -5, 00:13:22.667 "message": "Input/output error" 00:13:22.667 } 00:13:22.667 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:22.667 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:22.667 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:22.667 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:22.667 19:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:22.667 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key1 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.668 19:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.234 request: 00:13:23.234 { 00:13:23.234 "name": "nvme0", 00:13:23.234 "trtype": "tcp", 00:13:23.234 "traddr": "10.0.0.2", 00:13:23.234 "adrfam": "ipv4", 00:13:23.234 "trsvcid": "4420", 00:13:23.234 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:23.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff", 00:13:23.234 "prchk_reftag": false, 00:13:23.234 "prchk_guard": false, 00:13:23.234 "hdgst": false, 00:13:23.234 "ddgst": false, 00:13:23.234 "dhchap_key": "key1", 00:13:23.234 "dhchap_ctrlr_key": "ckey1", 00:13:23.234 "method": "bdev_nvme_attach_controller", 00:13:23.234 "req_id": 1 00:13:23.234 } 00:13:23.234 Got JSON-RPC error response 00:13:23.234 response: 00:13:23.234 { 00:13:23.234 "code": -5, 00:13:23.234 "message": "Input/output error" 00:13:23.234 } 00:13:23.234 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:23.234 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:23.234 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:23.234 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:23.234 19:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:23.234 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.234 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.234 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.234 19:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 69364 00:13:23.234 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69364 ']' 00:13:23.235 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69364 00:13:23.235 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:23.235 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.235 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69364 00:13:23.235 killing process with pid 69364 00:13:23.235 19:03:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:23.235 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:23.235 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69364' 00:13:23.235 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69364 00:13:23.235 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69364 00:13:23.235 19:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:23.493 19:03:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:23.493 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:23.493 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.493 19:03:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72403 00:13:23.493 19:03:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72403 00:13:23.493 19:03:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:23.493 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72403 ']' 00:13:23.493 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.493 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.493 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.493 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.493 19:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72403 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72403 ']' 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
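(Annotation: at this point the test kills the first target instance (pid 69364) and starts a fresh one with DH-HMAC-CHAP debug logging so the remaining negative cases can be traced. A minimal sketch of that restart pattern follows; the pid, binary path, netns name, and flags are taken from the log, the socket-wait loop merely stands in for the test's waitforlisten helper, and the framework_start_init call is an assumption about what the rpc_cmd batch at auth.sh@143 performs, since --wait-for-rpc leaves the app paused until initialization RPCs arrive.)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Stop the previous nvmf target instance and wait for it to exit.
  kill 69364
  while kill -0 69364 2>/dev/null; do sleep 0.1; done

  # Restart the target inside its network namespace with auth logging enabled
  # (-L nvmf_auth) and --wait-for-rpc so it pauses until it is configured.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # Block until the RPC socket is up (the test uses waitforlisten for this),
  # then let the framework finish starting so subsystems can be re-created.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  $rpc framework_start_init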
00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.427 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.684 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.684 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:24.684 19:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:24.684 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.684 19:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.942 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.507 00:13:25.507 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:25.507 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:25.507 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.765 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.765 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.765 19:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.765 19:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.765 19:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.765 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:25.765 { 00:13:25.765 "cntlid": 1, 00:13:25.765 "qid": 0, 
00:13:25.765 "state": "enabled", 00:13:25.765 "thread": "nvmf_tgt_poll_group_000", 00:13:25.765 "listen_address": { 00:13:25.765 "trtype": "TCP", 00:13:25.765 "adrfam": "IPv4", 00:13:25.765 "traddr": "10.0.0.2", 00:13:25.765 "trsvcid": "4420" 00:13:25.765 }, 00:13:25.765 "peer_address": { 00:13:25.765 "trtype": "TCP", 00:13:25.765 "adrfam": "IPv4", 00:13:25.765 "traddr": "10.0.0.1", 00:13:25.765 "trsvcid": "39334" 00:13:25.765 }, 00:13:25.765 "auth": { 00:13:25.765 "state": "completed", 00:13:25.765 "digest": "sha512", 00:13:25.765 "dhgroup": "ffdhe8192" 00:13:25.765 } 00:13:25.765 } 00:13:25.765 ]' 00:13:25.765 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:25.765 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.765 19:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:25.765 19:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:25.765 19:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.023 19:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.023 19:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.023 19:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.279 19:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid 1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-secret DHHC-1:03:Yzk5ZmFhYzNiNjgyMTNlZmZiMjdlYmI5NGMwNDc0MTNmNTMxNGFkYjVlZDFiYTViNzQ3MGViNjA1NzExM2E3NyN4/NI=: 00:13:26.845 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.845 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:26.845 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.845 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.845 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.845 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --dhchap-key key3 00:13:26.845 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.845 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.845 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.845 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:26.845 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:27.103 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.103 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:27.103 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.103 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:27.103 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.103 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:27.103 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.103 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.103 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.668 request: 00:13:27.668 { 00:13:27.668 "name": "nvme0", 00:13:27.668 "trtype": "tcp", 00:13:27.668 "traddr": "10.0.0.2", 00:13:27.668 "adrfam": "ipv4", 00:13:27.668 "trsvcid": "4420", 00:13:27.668 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:27.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff", 00:13:27.668 "prchk_reftag": false, 00:13:27.668 "prchk_guard": false, 00:13:27.668 "hdgst": false, 00:13:27.668 "ddgst": false, 00:13:27.668 "dhchap_key": "key3", 00:13:27.668 "method": "bdev_nvme_attach_controller", 00:13:27.668 "req_id": 1 00:13:27.668 } 00:13:27.668 Got JSON-RPC error response 00:13:27.668 response: 00:13:27.668 { 00:13:27.668 "code": -5, 00:13:27.668 "message": "Input/output error" 00:13:27.668 } 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.668 19:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.925 request: 00:13:27.925 { 00:13:27.925 "name": "nvme0", 00:13:27.925 "trtype": "tcp", 00:13:27.925 "traddr": "10.0.0.2", 00:13:27.925 "adrfam": "ipv4", 00:13:27.925 "trsvcid": "4420", 00:13:27.925 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:27.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff", 00:13:27.925 "prchk_reftag": false, 00:13:27.925 "prchk_guard": false, 00:13:27.925 "hdgst": false, 00:13:27.925 "ddgst": false, 00:13:27.925 "dhchap_key": "key3", 00:13:27.925 "method": "bdev_nvme_attach_controller", 00:13:27.925 "req_id": 1 00:13:27.925 } 00:13:27.925 Got JSON-RPC error response 00:13:27.925 response: 00:13:27.925 { 00:13:27.925 "code": -5, 00:13:27.925 "message": "Input/output error" 00:13:27.925 } 00:13:27.925 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:27.925 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:27.925 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:27.925 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:27.925 19:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:27.925 19:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:13:27.925 19:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:27.925 19:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:27.925 19:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:27.925 19:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:28.183 19:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:28.183 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.183 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:28.440 19:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:28.440 request: 00:13:28.440 { 00:13:28.440 "name": "nvme0", 00:13:28.440 "trtype": "tcp", 00:13:28.440 "traddr": "10.0.0.2", 00:13:28.440 "adrfam": "ipv4", 00:13:28.440 "trsvcid": "4420", 00:13:28.440 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:28.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff", 00:13:28.440 "prchk_reftag": false, 00:13:28.440 "prchk_guard": false, 00:13:28.440 "hdgst": false, 00:13:28.440 "ddgst": false, 00:13:28.440 "dhchap_key": "key0", 00:13:28.440 "dhchap_ctrlr_key": "key1", 00:13:28.440 "method": "bdev_nvme_attach_controller", 00:13:28.440 "req_id": 1 00:13:28.440 } 00:13:28.440 Got 
JSON-RPC error response 00:13:28.440 response: 00:13:28.440 { 00:13:28.440 "code": -5, 00:13:28.440 "message": "Input/output error" 00:13:28.440 } 00:13:28.698 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:28.698 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:28.698 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:28.698 19:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:28.698 19:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:28.698 19:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:28.956 00:13:28.956 19:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:28.956 19:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.956 19:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:29.213 19:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.213 19:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.213 19:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69396 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69396 ']' 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69396 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69396 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69396' 00:13:29.471 killing process with pid 69396 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69396 00:13:29.471 19:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69396 00:13:29.729 19:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:29.729 19:03:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:29.729 19:03:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 
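Condensed from the traced commands above, the host-side DH-CHAP flow this test drives over the /var/tmp/host.sock RPC socket looks roughly as follows. This is only a sketch: the address, port, NQNs and key names are the ones used in this run, and the keys themselves (key0, key1, key3) are keyring entries the test registered with the host app earlier, outside this excerpt.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff

  # restrict the initiator to an explicit set of digests and DH groups
  $RPC bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

  # attach to the authenticated subsystem; in the runs above, a key setup the
  # target does not accept fails the RPC with code -5 (Input/output error)
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0

  $RPC bdev_nvme_get_controllers            # expect a controller named "nvme0"
  $RPC bdev_nvme_detach_controller nvme0    # tear the session back down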
00:13:29.729 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:29.729 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:29.729 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:29.729 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:29.986 rmmod nvme_tcp 00:13:29.986 rmmod nvme_fabrics 00:13:29.986 rmmod nvme_keyring 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72403 ']' 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72403 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72403 ']' 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72403 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72403 00:13:29.986 killing process with pid 72403 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72403' 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72403 00:13:29.986 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72403 00:13:30.244 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:30.244 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:30.244 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:30.244 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.244 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:30.244 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.244 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.244 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.244 19:03:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:30.244 19:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.5ip /tmp/spdk.key-sha256.Lx1 /tmp/spdk.key-sha384.t0w /tmp/spdk.key-sha512.NsQ /tmp/spdk.key-sha512.qDB /tmp/spdk.key-sha384.smS /tmp/spdk.key-sha256.O4q '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:30.244 ************************************ 00:13:30.244 END TEST nvmf_auth_target 00:13:30.244 ************************************ 00:13:30.244 00:13:30.244 real 2m50.119s 00:13:30.244 user 6m47.588s 00:13:30.244 sys 0m26.588s 
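The cleanup traced above reduces to a short sequence. A plain-shell approximation, using the PIDs and paths from this run: the wait calls work because both apps are children of the test shell, and the netns removal line stands in for the _remove_spdk_ns helper, whose body is not shown in this excerpt.

  kill 69396; wait 69396             # host-side SPDK app driven over /var/tmp/host.sock
  modprobe -v -r nvme-tcp            # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill 72403; wait 72403             # the nvmf_tgt process
  ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if
  rm -f /tmp/spdk.key-*              # DH-CHAP secrets generated for this run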
00:13:30.244 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.244 19:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.244 19:03:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:30.244 19:03:57 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:30.244 19:03:57 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:30.244 19:03:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:30.244 19:03:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.244 19:03:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:30.244 ************************************ 00:13:30.244 START TEST nvmf_bdevio_no_huge 00:13:30.244 ************************************ 00:13:30.244 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:30.244 * Looking for test storage... 00:13:30.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:30.244 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.245 19:03:57 
nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 
-- # MALLOC_BDEV_SIZE=64 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:30.245 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:30.503 Cannot find device "nvmf_tgt_br" 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:30.503 Cannot find device "nvmf_tgt_br2" 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@156 -- # true 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:30.503 Cannot find device "nvmf_tgt_br" 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:30.503 Cannot find device "nvmf_tgt_br2" 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:30.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:30.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:30.503 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:30.504 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:30.504 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:30.504 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:30.504 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:30.504 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:30.504 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:30.504 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:30.504 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:30.504 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br 
type bridge 00:13:30.504 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:30.504 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:30.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:30.773 00:13:30.773 --- 10.0.0.2 ping statistics --- 00:13:30.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.773 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:30.773 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:30.773 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:13:30.773 00:13:30.773 --- 10.0.0.3 ping statistics --- 00:13:30.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.773 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:30.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:13:30.773 00:13:30.773 --- 10.0.0.1 ping statistics --- 00:13:30.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.773 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:30.773 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72725 00:13:30.774 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:30.774 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72725 00:13:30.774 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72725 ']' 00:13:30.774 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.774 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.774 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.774 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.774 19:03:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:30.774 [2024-07-15 19:03:57.930155] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:13:30.774 [2024-07-15 19:03:57.930261] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:31.031 [2024-07-15 19:03:58.080658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.031 [2024-07-15 19:03:58.200631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.031 [2024-07-15 19:03:58.200687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.031 [2024-07-15 19:03:58.200699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.031 [2024-07-15 19:03:58.200708] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.031 [2024-07-15 19:03:58.200715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
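Stripped of the tracing wrappers, the target bring-up here amounts to launching nvmf_tgt inside the test namespace with ordinary (non-hugepage) memory and waiting for its RPC socket. A minimal equivalent using the same binary path and options as this run; the polling loop is only a stand-in for the test's waitforlisten helper.

  # 1024 MB of malloc-backed memory instead of hugepages; core mask 0x78
  # matches the "Reactor started on core 3..6" messages below
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  nvmfpid=$!

  # wait until the app answers on its default RPC socket (/var/tmp/spdk.sock)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done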
00:13:31.031 [2024-07-15 19:03:58.200881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:31.031 [2024-07-15 19:03:58.201034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:31.031 [2024-07-15 19:03:58.201304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:31.031 [2024-07-15 19:03:58.201387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.032 [2024-07-15 19:03:58.205764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:31.964 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.964 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:13:31.964 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.964 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:31.964 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:31.964 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.964 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:31.964 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:31.965 [2024-07-15 19:03:58.958838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:31.965 Malloc0 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.965 19:03:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:31.965 [2024-07-15 19:03:58.998956] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.965 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.965 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:31.965 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:31.965 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:31.965 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:31.965 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:31.965 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:31.965 { 00:13:31.965 "params": { 00:13:31.965 "name": "Nvme$subsystem", 00:13:31.965 "trtype": "$TEST_TRANSPORT", 00:13:31.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:31.965 "adrfam": "ipv4", 00:13:31.965 "trsvcid": "$NVMF_PORT", 00:13:31.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:31.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:31.965 "hdgst": ${hdgst:-false}, 00:13:31.965 "ddgst": ${ddgst:-false} 00:13:31.965 }, 00:13:31.965 "method": "bdev_nvme_attach_controller" 00:13:31.965 } 00:13:31.965 EOF 00:13:31.965 )") 00:13:31.965 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:31.965 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:31.965 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:31.965 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:31.965 "params": { 00:13:31.965 "name": "Nvme1", 00:13:31.965 "trtype": "tcp", 00:13:31.965 "traddr": "10.0.0.2", 00:13:31.965 "adrfam": "ipv4", 00:13:31.965 "trsvcid": "4420", 00:13:31.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:31.965 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:31.965 "hdgst": false, 00:13:31.965 "ddgst": false 00:13:31.965 }, 00:13:31.965 "method": "bdev_nvme_attach_controller" 00:13:31.965 }' 00:13:31.965 [2024-07-15 19:03:59.046032] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
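The bdevio app receives its NVMe-oF connection as a JSON config on /dev/fd/62. Written to a regular file, the configuration generated above corresponds to roughly the following: the inner entry is exactly what the helper printed, while the outer subsystems/bdev framing is the usual SPDK JSON-config wrapper and the file name is only illustrative.

  cat > /tmp/bdevio_nvme.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # same invocation as above, reading the config from a file instead of /dev/fd/62
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024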
00:13:31.965 [2024-07-15 19:03:59.046534] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72767 ] 00:13:31.965 [2024-07-15 19:03:59.182363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:32.225 [2024-07-15 19:03:59.300294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.225 [2024-07-15 19:03:59.300395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.225 [2024-07-15 19:03:59.300570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.225 [2024-07-15 19:03:59.313065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:32.225 I/O targets: 00:13:32.225 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:32.225 00:13:32.225 00:13:32.225 CUnit - A unit testing framework for C - Version 2.1-3 00:13:32.225 http://cunit.sourceforge.net/ 00:13:32.225 00:13:32.225 00:13:32.225 Suite: bdevio tests on: Nvme1n1 00:13:32.225 Test: blockdev write read block ...passed 00:13:32.225 Test: blockdev write zeroes read block ...passed 00:13:32.225 Test: blockdev write zeroes read no split ...passed 00:13:32.225 Test: blockdev write zeroes read split ...passed 00:13:32.225 Test: blockdev write zeroes read split partial ...passed 00:13:32.225 Test: blockdev reset ...[2024-07-15 19:03:59.500179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:32.225 [2024-07-15 19:03:59.500453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a11a10 (9): Bad file descriptor 00:13:32.491 [2024-07-15 19:03:59.520422] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:32.491 passed 00:13:32.491 Test: blockdev write read 8 blocks ...passed 00:13:32.491 Test: blockdev write read size > 128k ...passed 00:13:32.491 Test: blockdev write read invalid size ...passed 00:13:32.491 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:32.491 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:32.491 Test: blockdev write read max offset ...passed 00:13:32.491 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:32.491 Test: blockdev writev readv 8 blocks ...passed 00:13:32.491 Test: blockdev writev readv 30 x 1block ...passed 00:13:32.491 Test: blockdev writev readv block ...passed 00:13:32.491 Test: blockdev writev readv size > 128k ...passed 00:13:32.491 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:32.491 Test: blockdev comparev and writev ...[2024-07-15 19:03:59.529311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.491 [2024-07-15 19:03:59.529474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:32.491 [2024-07-15 19:03:59.529678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.491 [2024-07-15 19:03:59.529808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:32.491 [2024-07-15 19:03:59.530302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.491 [2024-07-15 19:03:59.530454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:32.491 [2024-07-15 19:03:59.530632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.491 [2024-07-15 19:03:59.530760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:32.491 [2024-07-15 19:03:59.531283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.491 [2024-07-15 19:03:59.531307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:32.491 [2024-07-15 19:03:59.531325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.491 [2024-07-15 19:03:59.531335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:32.491 [2024-07-15 19:03:59.531624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.491 [2024-07-15 19:03:59.531646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:32.491 [2024-07-15 19:03:59.531663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:32.491 [2024-07-15 19:03:59.531673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:32.491 passed 00:13:32.491 Test: blockdev nvme passthru rw ...passed 00:13:32.491 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:03:59.532496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:32.491 [2024-07-15 19:03:59.532533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:32.491 [2024-07-15 19:03:59.532639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:32.491 [2024-07-15 19:03:59.532659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:32.491 [2024-07-15 19:03:59.532761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:32.491 [2024-07-15 19:03:59.532788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:32.491 passed 00:13:32.491 Test: blockdev nvme admin passthru ...[2024-07-15 19:03:59.532893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:32.491 [2024-07-15 19:03:59.532919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:32.491 passed 00:13:32.491 Test: blockdev copy ...passed 00:13:32.491 00:13:32.491 Run Summary: Type Total Ran Passed Failed Inactive 00:13:32.491 suites 1 1 n/a 0 0 00:13:32.491 tests 23 23 23 0 0 00:13:32.491 asserts 152 152 152 0 n/a 00:13:32.491 00:13:32.491 Elapsed time = 0.174 seconds 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.749 rmmod nvme_tcp 00:13:32.749 rmmod nvme_fabrics 00:13:32.749 rmmod nvme_keyring 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72725 ']' 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72725 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72725 ']' 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72725 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:32.749 19:03:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72725 00:13:32.749 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:32.749 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:32.749 killing process with pid 72725 00:13:32.749 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72725' 00:13:32.749 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72725 00:13:32.749 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72725 00:13:33.316 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:33.316 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:33.316 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:33.316 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:33.316 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:33.316 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.316 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.316 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.316 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:33.316 00:13:33.316 real 0m3.031s 00:13:33.316 user 0m9.940s 00:13:33.316 sys 0m1.158s 00:13:33.316 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:33.316 19:04:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:33.316 ************************************ 00:13:33.316 END TEST nvmf_bdevio_no_huge 00:13:33.316 ************************************ 00:13:33.316 19:04:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:33.316 19:04:00 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:33.316 19:04:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:33.316 19:04:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.316 19:04:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:33.316 ************************************ 00:13:33.316 START TEST nvmf_tls 00:13:33.316 ************************************ 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:33.316 * Looking for test storage... 
00:13:33.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:33.316 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:33.574 Cannot find device "nvmf_tgt_br" 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:33.574 Cannot find device "nvmf_tgt_br2" 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:33.574 Cannot find device "nvmf_tgt_br" 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:33.574 Cannot find device "nvmf_tgt_br2" 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:33.574 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:33.574 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:33.574 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:33.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:13:33.832 00:13:33.832 --- 10.0.0.2 ping statistics --- 00:13:33.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.832 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:33.832 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:33.832 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:13:33.832 00:13:33.832 --- 10.0.0.3 ping statistics --- 00:13:33.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.832 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:33.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:33.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:13:33.832 00:13:33.832 --- 10.0.0.1 ping statistics --- 00:13:33.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.832 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72936 00:13:33.832 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72936 00:13:33.833 19:04:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:33.833 19:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72936 ']' 00:13:33.833 19:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.833 19:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.833 19:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.833 19:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.833 19:04:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.833 [2024-07-15 19:04:00.986318] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:13:33.833 [2024-07-15 19:04:00.986431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.090 [2024-07-15 19:04:01.124210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.090 [2024-07-15 19:04:01.242694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.090 [2024-07-15 19:04:01.242755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:34.090 [2024-07-15 19:04:01.242769] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.090 [2024-07-15 19:04:01.242779] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.090 [2024-07-15 19:04:01.242788] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.090 [2024-07-15 19:04:01.242825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.656 19:04:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.656 19:04:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:34.656 19:04:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:34.656 19:04:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:34.656 19:04:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.656 19:04:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.656 19:04:01 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:34.656 19:04:01 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:34.915 true 00:13:35.172 19:04:02 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:35.172 19:04:02 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:35.430 19:04:02 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:35.430 19:04:02 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:35.430 19:04:02 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:35.430 19:04:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:35.430 19:04:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:35.686 19:04:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:35.686 19:04:02 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:35.686 19:04:02 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:35.943 19:04:03 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:35.944 19:04:03 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:36.207 19:04:03 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:36.207 19:04:03 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:36.207 19:04:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:36.207 19:04:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:36.470 19:04:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:36.470 19:04:03 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:36.470 19:04:03 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:36.728 19:04:03 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:36.728 19:04:03 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
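The trace above walks the ssl socket implementation's TLS knobs: tls.sh makes ssl the default sock impl, then sets tls_version and enable_ktls through rpc.py and reads each option back with jq to confirm the value stuck. A condensed sketch of that sequence, with rpc used as shorthand for the rpc.py path from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # make ssl the default socket implementation
  $rpc sock_set_default_impl -i ssl
  # pin TLS 1.3 and verify the option round-trips
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc sock_impl_get_options -i ssl | jq -r .tls_version    # expect 13
  # flip kernel TLS on, then back off, checking enable_ktls each time
  $rpc sock_impl_set_options -i ssl --enable-ktls
  $rpc sock_impl_get_options -i ssl | jq -r .enable_ktls    # expect true
  $rpc sock_impl_set_options -i ssl --disable-ktls
  $rpc sock_impl_get_options -i ssl | jq -r .enable_ktls    # expect false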
00:13:37.044 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:37.044 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:37.044 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:37.302 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:37.302 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:37.560 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:37.560 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:37.560 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:37.560 19:04:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:37.560 19:04:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.560 19:04:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:37.560 19:04:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:37.560 19:04:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:37.560 19:04:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:37.560 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.nh6uNRQpfo 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.dLiXOV8fjv 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.nh6uNRQpfo 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.dLiXOV8fjv 00:13:37.561 19:04:04 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:37.817 19:04:05 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:38.382 [2024-07-15 19:04:05.412440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:38.382 19:04:05 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.nh6uNRQpfo 00:13:38.382 19:04:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nh6uNRQpfo 00:13:38.382 19:04:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:38.641 [2024-07-15 19:04:05.700273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.641 19:04:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:38.900 19:04:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:38.900 [2024-07-15 19:04:06.172439] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:38.900 [2024-07-15 19:04:06.172697] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.157 19:04:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:39.157 malloc0 00:13:39.157 19:04:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:39.417 19:04:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nh6uNRQpfo 00:13:39.676 [2024-07-15 19:04:06.883596] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:39.676 19:04:06 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.nh6uNRQpfo 00:13:51.878 Initializing NVMe Controllers 00:13:51.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:51.878 Initialization complete. Launching workers. 
00:13:51.879 ======================================================== 00:13:51.879 Latency(us) 00:13:51.879 Device Information : IOPS MiB/s Average min max 00:13:51.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9416.79 36.78 6797.99 1678.91 8864.03 00:13:51.879 ======================================================== 00:13:51.879 Total : 9416.79 36.78 6797.99 1678.91 8864.03 00:13:51.879 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nh6uNRQpfo 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nh6uNRQpfo' 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73176 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73176 /var/tmp/bdevperf.sock 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73176 ']' 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:51.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.879 19:04:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:51.879 [2024-07-15 19:04:17.145171] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
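The IOPS/latency table above comes from spdk_nvme_perf driving the TLS listener directly with the first interchange key. Restated without the xtrace prefixes, the invocation recorded at tls.sh@137 was:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path /tmp/tmp.nh6uNRQpfo
  # -S ssl selects the ssl sock impl on the initiator side; --psk-path points at the
  # 0600 key file that was registered for host1 with nvmf_subsystem_add_host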
00:13:51.879 [2024-07-15 19:04:17.145252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73176 ] 00:13:51.879 [2024-07-15 19:04:17.276328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.879 [2024-07-15 19:04:17.395809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.879 [2024-07-15 19:04:17.449958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:51.879 19:04:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.879 19:04:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:51.879 19:04:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nh6uNRQpfo 00:13:51.879 [2024-07-15 19:04:18.305014] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:51.879 [2024-07-15 19:04:18.305166] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:51.879 TLSTESTn1 00:13:51.879 19:04:18 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:51.879 Running I/O for 10 seconds... 00:14:01.851 00:14:01.851 Latency(us) 00:14:01.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.851 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:01.851 Verification LBA range: start 0x0 length 0x2000 00:14:01.851 TLSTESTn1 : 10.03 3603.48 14.08 0.00 0.00 35429.62 6017.40 26571.87 00:14:01.851 =================================================================================================================== 00:14:01.851 Total : 3603.48 14.08 0.00 0.00 35429.62 6017.40 26571.87 00:14:01.851 0 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73176 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73176 ']' 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73176 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73176 00:14:01.851 killing process with pid 73176 00:14:01.851 Received shutdown signal, test time was about 10.000000 seconds 00:14:01.851 00:14:01.851 Latency(us) 00:14:01.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.851 =================================================================================================================== 00:14:01.851 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73176' 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73176 00:14:01.851 [2024-07-15 19:04:28.590476] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73176 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dLiXOV8fjv 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dLiXOV8fjv 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:01.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dLiXOV8fjv 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dLiXOV8fjv' 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73311 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73311 /var/tmp/bdevperf.sock 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73311 ']' 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.851 19:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.851 [2024-07-15 19:04:28.870106] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
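From here tls.sh runs its negative cases: bdevperf is restarted for each one and bdev_nvme_attach_controller is issued with credentials the target will not accept, with the NOT wrapper expecting the RPC to fail. A condensed sketch of the four variants exercised below (attach() is shorthand introduced only for this sketch; the script itself restarts bdevperf between cases):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
  while [ ! -S "$sock" ]; do sleep 0.1; done   # wait for the bdevperf RPC socket
  attach() {
      "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
          -a 10.0.0.2 -s 4420 -f ipv4 "$@"
  }
  # 1) a key the target never registered for host1 on cnode1
  attach -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dLiXOV8fjv
  # 2) the right key, but a host NQN with no PSK on the subsystem
  attach -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.nh6uNRQpfo
  # 3) the right key, but a subsystem NQN that was never created
  attach -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nh6uNRQpfo
  # 4) no key at all against the TLS-only listener
  attach -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # each call is expected to fail with JSON-RPC error -5 (Input/output error)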
00:14:01.851 [2024-07-15 19:04:28.870199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73311 ] 00:14:01.851 [2024-07-15 19:04:29.002773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.851 [2024-07-15 19:04:29.121687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.110 [2024-07-15 19:04:29.175902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:02.676 19:04:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.676 19:04:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:02.676 19:04:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dLiXOV8fjv 00:14:02.935 [2024-07-15 19:04:30.073103] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:02.935 [2024-07-15 19:04:30.073247] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:02.935 [2024-07-15 19:04:30.078354] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:02.935 [2024-07-15 19:04:30.078827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eb3d0 (107): Transport endpoint is not connected 00:14:02.935 [2024-07-15 19:04:30.079805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23eb3d0 (9): Bad file descriptor 00:14:02.935 [2024-07-15 19:04:30.080801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:02.935 [2024-07-15 19:04:30.080833] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:02.935 [2024-07-15 19:04:30.080848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:02.935 request: 00:14:02.935 { 00:14:02.935 "name": "TLSTEST", 00:14:02.935 "trtype": "tcp", 00:14:02.935 "traddr": "10.0.0.2", 00:14:02.935 "adrfam": "ipv4", 00:14:02.935 "trsvcid": "4420", 00:14:02.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.935 "prchk_reftag": false, 00:14:02.935 "prchk_guard": false, 00:14:02.935 "hdgst": false, 00:14:02.935 "ddgst": false, 00:14:02.935 "psk": "/tmp/tmp.dLiXOV8fjv", 00:14:02.935 "method": "bdev_nvme_attach_controller", 00:14:02.935 "req_id": 1 00:14:02.935 } 00:14:02.935 Got JSON-RPC error response 00:14:02.935 response: 00:14:02.935 { 00:14:02.935 "code": -5, 00:14:02.935 "message": "Input/output error" 00:14:02.935 } 00:14:02.935 19:04:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73311 00:14:02.935 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73311 ']' 00:14:02.935 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73311 00:14:02.935 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:02.935 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:02.935 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73311 00:14:02.935 killing process with pid 73311 00:14:02.935 Received shutdown signal, test time was about 10.000000 seconds 00:14:02.935 00:14:02.935 Latency(us) 00:14:02.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.935 =================================================================================================================== 00:14:02.935 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:02.935 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:02.935 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:02.935 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73311' 00:14:02.935 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73311 00:14:02.935 [2024-07-15 19:04:30.122531] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:02.935 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73311 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nh6uNRQpfo 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nh6uNRQpfo 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.nh6uNRQpfo 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nh6uNRQpfo' 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73333 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73333 /var/tmp/bdevperf.sock 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73333 ']' 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.193 19:04:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.193 [2024-07-15 19:04:30.405977] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:14:03.193 [2024-07-15 19:04:30.406077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73333 ] 00:14:03.452 [2024-07-15 19:04:30.547782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.452 [2024-07-15 19:04:30.667816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.452 [2024-07-15 19:04:30.720963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:04.384 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.384 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:04.384 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.nh6uNRQpfo 00:14:04.384 [2024-07-15 19:04:31.612496] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:04.384 [2024-07-15 19:04:31.612640] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:04.384 [2024-07-15 19:04:31.617515] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:04.384 [2024-07-15 19:04:31.617585] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:04.384 [2024-07-15 19:04:31.617713] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:04.384 [2024-07-15 19:04:31.618198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9d3d0 (107): Transport endpoint is not connected 00:14:04.384 [2024-07-15 19:04:31.619178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9d3d0 (9): Bad file descriptor 00:14:04.384 [2024-07-15 19:04:31.620174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:04.384 [2024-07-15 19:04:31.620195] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:04.384 [2024-07-15 19:04:31.620209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:04.384 request: 00:14:04.385 { 00:14:04.385 "name": "TLSTEST", 00:14:04.385 "trtype": "tcp", 00:14:04.385 "traddr": "10.0.0.2", 00:14:04.385 "adrfam": "ipv4", 00:14:04.385 "trsvcid": "4420", 00:14:04.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.385 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:04.385 "prchk_reftag": false, 00:14:04.385 "prchk_guard": false, 00:14:04.385 "hdgst": false, 00:14:04.385 "ddgst": false, 00:14:04.385 "psk": "/tmp/tmp.nh6uNRQpfo", 00:14:04.385 "method": "bdev_nvme_attach_controller", 00:14:04.385 "req_id": 1 00:14:04.385 } 00:14:04.385 Got JSON-RPC error response 00:14:04.385 response: 00:14:04.385 { 00:14:04.385 "code": -5, 00:14:04.385 "message": "Input/output error" 00:14:04.385 } 00:14:04.385 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73333 00:14:04.385 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73333 ']' 00:14:04.385 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73333 00:14:04.385 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:04.385 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:04.385 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73333 00:14:04.385 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:04.385 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:04.385 killing process with pid 73333 00:14:04.385 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73333' 00:14:04.385 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73333 00:14:04.385 Received shutdown signal, test time was about 10.000000 seconds 00:14:04.385 00:14:04.385 Latency(us) 00:14:04.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.385 =================================================================================================================== 00:14:04.385 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:04.385 [2024-07-15 19:04:31.659653] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:04.385 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73333 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nh6uNRQpfo 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nh6uNRQpfo 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.nh6uNRQpfo 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nh6uNRQpfo' 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73362 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73362 /var/tmp/bdevperf.sock 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73362 ']' 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.643 19:04:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.901 [2024-07-15 19:04:31.944930] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:14:04.901 [2024-07-15 19:04:31.945036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73362 ] 00:14:04.901 [2024-07-15 19:04:32.089922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.158 [2024-07-15 19:04:32.209126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.158 [2024-07-15 19:04:32.262789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:05.722 19:04:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.722 19:04:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:05.722 19:04:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nh6uNRQpfo 00:14:05.981 [2024-07-15 19:04:33.224587] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:05.981 [2024-07-15 19:04:33.225182] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:05.981 [2024-07-15 19:04:33.236485] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:05.981 [2024-07-15 19:04:33.236889] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:05.981 [2024-07-15 19:04:33.237037] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:05.981 [2024-07-15 19:04:33.237920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9953d0 (107): Transport endpoint is not connected 00:14:05.981 [2024-07-15 19:04:33.238908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9953d0 (9): Bad file descriptor 00:14:05.981 [2024-07-15 19:04:33.239904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:05.981 [2024-07-15 19:04:33.240019] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:05.981 [2024-07-15 19:04:33.240093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:05.981 request: 00:14:05.981 { 00:14:05.981 "name": "TLSTEST", 00:14:05.981 "trtype": "tcp", 00:14:05.981 "traddr": "10.0.0.2", 00:14:05.981 "adrfam": "ipv4", 00:14:05.981 "trsvcid": "4420", 00:14:05.981 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:05.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:05.981 "prchk_reftag": false, 00:14:05.981 "prchk_guard": false, 00:14:05.981 "hdgst": false, 00:14:05.981 "ddgst": false, 00:14:05.981 "psk": "/tmp/tmp.nh6uNRQpfo", 00:14:05.981 "method": "bdev_nvme_attach_controller", 00:14:05.981 "req_id": 1 00:14:05.981 } 00:14:05.981 Got JSON-RPC error response 00:14:05.981 response: 00:14:05.981 { 00:14:05.981 "code": -5, 00:14:05.981 "message": "Input/output error" 00:14:05.981 } 00:14:05.981 19:04:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73362 00:14:05.981 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73362 ']' 00:14:05.981 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73362 00:14:05.981 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:05.981 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.981 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73362 00:14:06.239 killing process with pid 73362 00:14:06.239 Received shutdown signal, test time was about 10.000000 seconds 00:14:06.239 00:14:06.239 Latency(us) 00:14:06.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.239 =================================================================================================================== 00:14:06.239 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73362' 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73362 00:14:06.239 [2024-07-15 19:04:33.282454] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73362 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:14:06.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:06.239 19:04:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:06.240 19:04:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:06.240 19:04:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:06.240 19:04:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73389 00:14:06.240 19:04:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:06.240 19:04:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:06.240 19:04:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73389 /var/tmp/bdevperf.sock 00:14:06.240 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73389 ']' 00:14:06.240 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.240 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.240 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.240 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.240 19:04:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.498 [2024-07-15 19:04:33.542649] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:14:06.499 [2024-07-15 19:04:33.542751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73389 ] 00:14:06.499 [2024-07-15 19:04:33.673937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.756 [2024-07-15 19:04:33.827414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.756 [2024-07-15 19:04:33.882850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:07.324 19:04:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.324 19:04:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:07.324 19:04:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:07.582 [2024-07-15 19:04:34.736933] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:07.582 [2024-07-15 19:04:34.739163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a12da0 (9): Bad file descriptor 00:14:07.582 [2024-07-15 19:04:34.740158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:07.582 [2024-07-15 19:04:34.740195] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:07.582 [2024-07-15 19:04:34.740224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:07.582 request: 00:14:07.582 { 00:14:07.582 "name": "TLSTEST", 00:14:07.582 "trtype": "tcp", 00:14:07.582 "traddr": "10.0.0.2", 00:14:07.582 "adrfam": "ipv4", 00:14:07.582 "trsvcid": "4420", 00:14:07.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.582 "prchk_reftag": false, 00:14:07.582 "prchk_guard": false, 00:14:07.582 "hdgst": false, 00:14:07.582 "ddgst": false, 00:14:07.582 "method": "bdev_nvme_attach_controller", 00:14:07.582 "req_id": 1 00:14:07.582 } 00:14:07.582 Got JSON-RPC error response 00:14:07.582 response: 00:14:07.582 { 00:14:07.582 "code": -5, 00:14:07.582 "message": "Input/output error" 00:14:07.582 } 00:14:07.582 19:04:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73389 00:14:07.582 19:04:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73389 ']' 00:14:07.582 19:04:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73389 00:14:07.582 19:04:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:07.582 19:04:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.583 19:04:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73389 00:14:07.583 19:04:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:07.583 19:04:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:07.583 killing process with pid 73389 00:14:07.583 19:04:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73389' 00:14:07.583 Received shutdown signal, test time was about 10.000000 seconds 00:14:07.583 00:14:07.583 Latency(us) 00:14:07.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.583 =================================================================================================================== 00:14:07.583 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:07.583 19:04:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73389 00:14:07.583 19:04:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73389 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72936 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72936 ']' 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72936 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72936 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:07.841 killing process with pid 72936 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72936' 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72936 00:14:07.841 [2024-07-15 19:04:35.039569] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:07.841 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72936 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.PIZi588o2F 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.PIZi588o2F 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73428 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73428 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73428 ']' 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.100 19:04:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.359 [2024-07-15 19:04:35.389794] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
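The format_interchange_psk call traced above wraps the raw key material into the NVMe TLS PSK interchange format: nvmf/common.sh runs an inline Python snippet that appends a CRC-32 of the configured key bytes, base64-encodes the result, and prefixes it with "NVMeTLSkey-1" plus the two-digit HMAC identifier ("02" here). A minimal standalone sketch of that construction follows; the little-endian CRC packing and the use of the hex string verbatim (rather than its decoded bytes) are assumptions inferred from the key_long value in the log, not a quote of the helper itself.

    import base64, zlib

    def format_interchange_psk(key: str, hmac_id: int, prefix: str = "NVMeTLSkey-1") -> str:
        # The configured PSK is taken as the literal ASCII string; it is not hex-decoded first.
        raw = key.encode("ascii")
        # Append a CRC-32 of the key bytes (byte order assumed little-endian here).
        crc = zlib.crc32(raw).to_bytes(4, "little")
        return "{}:{:02x}:{}:".format(prefix, hmac_id, base64.b64encode(raw + crc).decode("ascii"))

    # Should reproduce the key_long captured above if the assumptions hold:
    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))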
00:14:08.359 [2024-07-15 19:04:35.389867] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.359 [2024-07-15 19:04:35.526261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.359 [2024-07-15 19:04:35.639461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.359 [2024-07-15 19:04:35.639532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.359 [2024-07-15 19:04:35.639544] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.359 [2024-07-15 19:04:35.639552] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.359 [2024-07-15 19:04:35.639558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.359 [2024-07-15 19:04:35.639582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.618 [2024-07-15 19:04:35.693500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:09.184 19:04:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.184 19:04:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:09.184 19:04:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.184 19:04:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:09.184 19:04:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.184 19:04:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.184 19:04:36 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.PIZi588o2F 00:14:09.184 19:04:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PIZi588o2F 00:14:09.184 19:04:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:09.443 [2024-07-15 19:04:36.644964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.443 19:04:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:09.818 19:04:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:10.077 [2024-07-15 19:04:37.157065] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:10.077 [2024-07-15 19:04:37.157264] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.077 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:10.339 malloc0 00:14:10.339 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:10.598 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PIZi588o2F 00:14:10.857 
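The setup_nvmf_tgt steps traced above configure the target entirely over JSON-RPC: a TCP transport with default options, subsystem nqn.2016-06.io.spdk:cnode1, a TLS-enabled listener (the -k flag), a 32 MiB malloc bdev exposed as namespace 1, and finally the host entry carrying the PSK path (the flow SPDK flags as deprecated in the warning immediately below). A condensed sketch of the same sequence driven from Python through scripts/rpc.py; every command and flag is taken from the log, only the rpc() wrapper is illustrative.

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    KEY = "/tmp/tmp.PIZi588o2F"   # the 0600-permission PSK file created above

    def rpc(*args):
        # Thin wrapper over the SPDK JSON-RPC CLI; raises on a non-zero exit code.
        subprocess.run([RPC, *args], check=True)

    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1", "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")        # -k requests a TLS listener
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
    rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
        "nqn.2016-06.io.spdk:host1", "--psk", KEY)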
[2024-07-15 19:04:37.899904] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PIZi588o2F 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PIZi588o2F' 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73487 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73487 /var/tmp/bdevperf.sock 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73487 ']' 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.857 19:04:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.857 [2024-07-15 19:04:37.974337] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
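Above, run_bdevperf launches the initiator-side bdevperf process in RPC-server mode (-z, listening on /var/tmp/bdevperf.sock) with a queue depth of 128 and a 4096-byte verify workload; as the next log lines show, the test then attaches a TLS-secured controller through that socket and starts the timed run with bdevperf.py. A compact sketch of that driving sequence, reusing the paths, NQNs and flags from the log (only the run() helper is illustrative):

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    BDEVPERF_PY = "/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py"
    SOCK = "/var/tmp/bdevperf.sock"

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Create the TLS-secured NVMe-oF bdev inside the already-running bdevperf process.
    run([RPC, "-s", SOCK, "bdev_nvme_attach_controller", "-b", "TLSTEST",
         "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
         "-n", "nqn.2016-06.io.spdk:cnode1", "-q", "nqn.2016-06.io.spdk:host1",
         "--psk", "/tmp/tmp.PIZi588o2F"])

    # Kick off the timed verify run against the bdev just created (TLSTESTn1 in the log).
    run([BDEVPERF_PY, "-t", "20", "-s", SOCK, "perform_tests"])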
00:14:10.857 [2024-07-15 19:04:37.974437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73487 ] 00:14:10.857 [2024-07-15 19:04:38.116040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.114 [2024-07-15 19:04:38.228403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.114 [2024-07-15 19:04:38.282067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:11.681 19:04:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.681 19:04:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:11.681 19:04:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PIZi588o2F 00:14:11.959 [2024-07-15 19:04:39.066259] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:11.959 [2024-07-15 19:04:39.066413] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:11.959 TLSTESTn1 00:14:11.959 19:04:39 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:12.218 Running I/O for 10 seconds... 00:14:22.210 00:14:22.210 Latency(us) 00:14:22.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.210 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:22.210 Verification LBA range: start 0x0 length 0x2000 00:14:22.210 TLSTESTn1 : 10.02 4048.21 15.81 0.00 0.00 31559.34 5868.45 23592.96 00:14:22.210 =================================================================================================================== 00:14:22.210 Total : 4048.21 15.81 0.00 0.00 31559.34 5868.45 23592.96 00:14:22.210 0 00:14:22.210 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:22.210 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73487 00:14:22.210 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73487 ']' 00:14:22.210 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73487 00:14:22.210 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:22.210 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.210 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73487 00:14:22.210 killing process with pid 73487 00:14:22.210 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.210 00:14:22.210 Latency(us) 00:14:22.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.210 =================================================================================================================== 00:14:22.210 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.210 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:22.210 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:22.210 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73487' 00:14:22.210 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73487 00:14:22.210 [2024-07-15 19:04:49.314586] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:22.210 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73487 00:14:22.469 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.PIZi588o2F 00:14:22.469 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PIZi588o2F 00:14:22.469 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:22.469 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PIZi588o2F 00:14:22.469 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:22.469 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PIZi588o2F 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PIZi588o2F' 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73616 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73616 /var/tmp/bdevperf.sock 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73616 ']' 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:22.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.470 19:04:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.470 [2024-07-15 19:04:49.596486] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:14:22.470 [2024-07-15 19:04:49.596719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73616 ] 00:14:22.470 [2024-07-15 19:04:49.728134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.728 [2024-07-15 19:04:49.832066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.728 [2024-07-15 19:04:49.888281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PIZi588o2F 00:14:23.662 [2024-07-15 19:04:50.870017] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:23.662 [2024-07-15 19:04:50.870358] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:23.662 [2024-07-15 19:04:50.870477] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.PIZi588o2F 00:14:23.662 request: 00:14:23.662 { 00:14:23.662 "name": "TLSTEST", 00:14:23.662 "trtype": "tcp", 00:14:23.662 "traddr": "10.0.0.2", 00:14:23.662 "adrfam": "ipv4", 00:14:23.662 "trsvcid": "4420", 00:14:23.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:23.662 "prchk_reftag": false, 00:14:23.662 "prchk_guard": false, 00:14:23.662 "hdgst": false, 00:14:23.662 "ddgst": false, 00:14:23.662 "psk": "/tmp/tmp.PIZi588o2F", 00:14:23.662 "method": "bdev_nvme_attach_controller", 00:14:23.662 "req_id": 1 00:14:23.662 } 00:14:23.662 Got JSON-RPC error response 00:14:23.662 response: 00:14:23.662 { 00:14:23.662 "code": -1, 00:14:23.662 "message": "Operation not permitted" 00:14:23.662 } 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73616 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73616 ']' 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73616 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73616 00:14:23.662 killing process with pid 73616 00:14:23.662 Received shutdown signal, test time was about 10.000000 seconds 00:14:23.662 00:14:23.662 Latency(us) 00:14:23.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.662 =================================================================================================================== 00:14:23.662 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73616' 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73616 00:14:23.662 19:04:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73616 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73428 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73428 ']' 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73428 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73428 00:14:23.921 killing process with pid 73428 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73428' 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73428 00:14:23.921 [2024-07-15 19:04:51.163839] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:23.921 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73428 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73649 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73649 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73649 ']' 00:14:24.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:24.276 19:04:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.276 [2024-07-15 19:04:51.446968] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
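Both the bdevperf attach failure above ("Incorrect permissions for PSK file", surfaced to the RPC caller as -1 "Operation not permitted") and the nvmf_subsystem_add_host failure that follows below are provoked by the chmod 0666 on /tmp/tmp.PIZi588o2F: SPDK refuses to load a PSK file that is readable or writable by group or other. A rough, hypothetical Python equivalent of that gate (not SPDK's actual code) is:

    import os
    import stat

    def check_psk_mode(path: str) -> None:
        # Reject key files that anyone other than the owner can access,
        # mirroring the 0600-pass / 0666-fail behaviour seen in this log.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        if mode & 0o077:
            raise PermissionError(f"incorrect permissions {oct(mode)} for PSK file {path}")

    check_psk_mode("/tmp/tmp.PIZi588o2F")   # passes at 0600, raises at 0666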
00:14:24.276 [2024-07-15 19:04:51.447045] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.535 [2024-07-15 19:04:51.585263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.535 [2024-07-15 19:04:51.689807] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.535 [2024-07-15 19:04:51.689861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.535 [2024-07-15 19:04:51.689888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.535 [2024-07-15 19:04:51.689896] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.535 [2024-07-15 19:04:51.689903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.535 [2024-07-15 19:04:51.689933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.535 [2024-07-15 19:04:51.746612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:25.102 19:04:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.102 19:04:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:25.102 19:04:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.102 19:04:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:25.102 19:04:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.360 19:04:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.360 19:04:52 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.PIZi588o2F 00:14:25.360 19:04:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:25.360 19:04:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.PIZi588o2F 00:14:25.360 19:04:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:25.360 19:04:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.360 19:04:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:25.360 19:04:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.360 19:04:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.PIZi588o2F 00:14:25.360 19:04:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PIZi588o2F 00:14:25.360 19:04:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:25.619 [2024-07-15 19:04:52.653740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.619 19:04:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:25.878 19:04:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:25.878 [2024-07-15 19:04:53.149831] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:14:25.878 [2024-07-15 19:04:53.150077] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.143 19:04:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:26.143 malloc0 00:14:26.143 19:04:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:26.406 19:04:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PIZi588o2F 00:14:26.665 [2024-07-15 19:04:53.817132] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:26.665 [2024-07-15 19:04:53.817173] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:26.665 [2024-07-15 19:04:53.817222] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:26.665 request: 00:14:26.665 { 00:14:26.665 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.665 "host": "nqn.2016-06.io.spdk:host1", 00:14:26.665 "psk": "/tmp/tmp.PIZi588o2F", 00:14:26.665 "method": "nvmf_subsystem_add_host", 00:14:26.665 "req_id": 1 00:14:26.665 } 00:14:26.665 Got JSON-RPC error response 00:14:26.665 response: 00:14:26.665 { 00:14:26.665 "code": -32603, 00:14:26.665 "message": "Internal error" 00:14:26.665 } 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73649 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73649 ']' 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73649 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73649 00:14:26.665 killing process with pid 73649 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73649' 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73649 00:14:26.665 19:04:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73649 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.PIZi588o2F 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73711 00:14:26.924 
19:04:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73711 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73711 ']' 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.924 19:04:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.924 [2024-07-15 19:04:54.161466] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:26.924 [2024-07-15 19:04:54.161568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.183 [2024-07-15 19:04:54.301278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.183 [2024-07-15 19:04:54.401800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.183 [2024-07-15 19:04:54.401852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.183 [2024-07-15 19:04:54.401879] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.183 [2024-07-15 19:04:54.401905] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.183 [2024-07-15 19:04:54.401912] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:27.183 [2024-07-15 19:04:54.401937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.183 [2024-07-15 19:04:54.455890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:28.118 19:04:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.118 19:04:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:28.118 19:04:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.118 19:04:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:28.118 19:04:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:28.118 19:04:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.118 19:04:55 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.PIZi588o2F 00:14:28.118 19:04:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PIZi588o2F 00:14:28.118 19:04:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:28.118 [2024-07-15 19:04:55.395965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.377 19:04:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:28.377 19:04:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:28.637 [2024-07-15 19:04:55.884074] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:28.637 [2024-07-15 19:04:55.884280] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.637 19:04:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:28.895 malloc0 00:14:28.895 19:04:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:29.153 19:04:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PIZi588o2F 00:14:29.411 [2024-07-15 19:04:56.632119] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:29.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
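With the key restored to 0600, the target reconfigured, and a fresh bdevperf instance being brought up, the test snapshots both JSON-RPC configurations with save_config (the two large JSON blocks that follow); the target-side dump is later replayed into a fresh nvmf_tgt via -c /dev/fd/62 at the end of this section. A small sketch of capturing such a config and checking that the PSK host entry survives the round trip; the JSON keys referenced are the ones visible in the dump below, while the helper itself is illustrative.

    import json
    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def save_config(*rpc_args) -> dict:
        out = subprocess.run([RPC, *rpc_args, "save_config"],
                             check=True, capture_output=True, text=True)
        return json.loads(out.stdout)

    tgt_conf = save_config()                                    # target, default /var/tmp/spdk.sock
    bperf_conf = save_config("-s", "/var/tmp/bdevperf.sock")    # bdevperf instance

    # Confirm the TLS host entry (and its PSK path) made it into the saved target config.
    nvmf = next(s for s in tgt_conf["subsystems"] if s["subsystem"] == "nvmf")
    hosts = [c["params"] for c in nvmf["config"] if c["method"] == "nvmf_subsystem_add_host"]
    assert hosts and hosts[0]["psk"] == "/tmp/tmp.PIZi588o2F"

    # The captured target config can then be fed back as a startup config,
    # which is what nvmf_tgt ... -c /dev/fd/62 does with $tgtconf further below.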
00:14:29.411 19:04:56 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73766 00:14:29.411 19:04:56 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:29.411 19:04:56 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:29.411 19:04:56 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73766 /var/tmp/bdevperf.sock 00:14:29.411 19:04:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73766 ']' 00:14:29.411 19:04:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.411 19:04:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.411 19:04:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:29.411 19:04:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.411 19:04:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.411 [2024-07-15 19:04:56.698741] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:29.411 [2024-07-15 19:04:56.699015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73766 ] 00:14:29.669 [2024-07-15 19:04:56.839640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.669 [2024-07-15 19:04:56.949982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.927 [2024-07-15 19:04:57.007032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:30.494 19:04:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.494 19:04:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:30.494 19:04:57 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PIZi588o2F 00:14:30.753 [2024-07-15 19:04:57.894446] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:30.753 [2024-07-15 19:04:57.894601] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:30.754 TLSTESTn1 00:14:30.754 19:04:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:31.322 19:04:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:31.322 "subsystems": [ 00:14:31.322 { 00:14:31.322 "subsystem": "keyring", 00:14:31.322 "config": [] 00:14:31.322 }, 00:14:31.322 { 00:14:31.322 "subsystem": "iobuf", 00:14:31.322 "config": [ 00:14:31.322 { 00:14:31.322 "method": "iobuf_set_options", 00:14:31.322 "params": { 00:14:31.322 "small_pool_count": 8192, 00:14:31.322 "large_pool_count": 1024, 00:14:31.322 "small_bufsize": 8192, 00:14:31.322 "large_bufsize": 135168 00:14:31.322 } 00:14:31.322 } 00:14:31.322 ] 00:14:31.322 }, 00:14:31.322 { 00:14:31.322 "subsystem": "sock", 00:14:31.322 "config": [ 00:14:31.322 { 00:14:31.322 
"method": "sock_set_default_impl", 00:14:31.322 "params": { 00:14:31.322 "impl_name": "uring" 00:14:31.322 } 00:14:31.322 }, 00:14:31.322 { 00:14:31.322 "method": "sock_impl_set_options", 00:14:31.322 "params": { 00:14:31.322 "impl_name": "ssl", 00:14:31.322 "recv_buf_size": 4096, 00:14:31.322 "send_buf_size": 4096, 00:14:31.322 "enable_recv_pipe": true, 00:14:31.322 "enable_quickack": false, 00:14:31.322 "enable_placement_id": 0, 00:14:31.322 "enable_zerocopy_send_server": true, 00:14:31.322 "enable_zerocopy_send_client": false, 00:14:31.322 "zerocopy_threshold": 0, 00:14:31.322 "tls_version": 0, 00:14:31.322 "enable_ktls": false 00:14:31.322 } 00:14:31.322 }, 00:14:31.322 { 00:14:31.322 "method": "sock_impl_set_options", 00:14:31.322 "params": { 00:14:31.322 "impl_name": "posix", 00:14:31.322 "recv_buf_size": 2097152, 00:14:31.322 "send_buf_size": 2097152, 00:14:31.322 "enable_recv_pipe": true, 00:14:31.322 "enable_quickack": false, 00:14:31.322 "enable_placement_id": 0, 00:14:31.322 "enable_zerocopy_send_server": true, 00:14:31.322 "enable_zerocopy_send_client": false, 00:14:31.322 "zerocopy_threshold": 0, 00:14:31.322 "tls_version": 0, 00:14:31.322 "enable_ktls": false 00:14:31.322 } 00:14:31.322 }, 00:14:31.322 { 00:14:31.322 "method": "sock_impl_set_options", 00:14:31.322 "params": { 00:14:31.322 "impl_name": "uring", 00:14:31.322 "recv_buf_size": 2097152, 00:14:31.322 "send_buf_size": 2097152, 00:14:31.322 "enable_recv_pipe": true, 00:14:31.322 "enable_quickack": false, 00:14:31.322 "enable_placement_id": 0, 00:14:31.322 "enable_zerocopy_send_server": false, 00:14:31.322 "enable_zerocopy_send_client": false, 00:14:31.322 "zerocopy_threshold": 0, 00:14:31.322 "tls_version": 0, 00:14:31.322 "enable_ktls": false 00:14:31.322 } 00:14:31.322 } 00:14:31.322 ] 00:14:31.322 }, 00:14:31.322 { 00:14:31.322 "subsystem": "vmd", 00:14:31.322 "config": [] 00:14:31.322 }, 00:14:31.322 { 00:14:31.322 "subsystem": "accel", 00:14:31.322 "config": [ 00:14:31.322 { 00:14:31.322 "method": "accel_set_options", 00:14:31.322 "params": { 00:14:31.322 "small_cache_size": 128, 00:14:31.322 "large_cache_size": 16, 00:14:31.322 "task_count": 2048, 00:14:31.322 "sequence_count": 2048, 00:14:31.323 "buf_count": 2048 00:14:31.323 } 00:14:31.323 } 00:14:31.323 ] 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "subsystem": "bdev", 00:14:31.323 "config": [ 00:14:31.323 { 00:14:31.323 "method": "bdev_set_options", 00:14:31.323 "params": { 00:14:31.323 "bdev_io_pool_size": 65535, 00:14:31.323 "bdev_io_cache_size": 256, 00:14:31.323 "bdev_auto_examine": true, 00:14:31.323 "iobuf_small_cache_size": 128, 00:14:31.323 "iobuf_large_cache_size": 16 00:14:31.323 } 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "method": "bdev_raid_set_options", 00:14:31.323 "params": { 00:14:31.323 "process_window_size_kb": 1024 00:14:31.323 } 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "method": "bdev_iscsi_set_options", 00:14:31.323 "params": { 00:14:31.323 "timeout_sec": 30 00:14:31.323 } 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "method": "bdev_nvme_set_options", 00:14:31.323 "params": { 00:14:31.323 "action_on_timeout": "none", 00:14:31.323 "timeout_us": 0, 00:14:31.323 "timeout_admin_us": 0, 00:14:31.323 "keep_alive_timeout_ms": 10000, 00:14:31.323 "arbitration_burst": 0, 00:14:31.323 "low_priority_weight": 0, 00:14:31.323 "medium_priority_weight": 0, 00:14:31.323 "high_priority_weight": 0, 00:14:31.323 "nvme_adminq_poll_period_us": 10000, 00:14:31.323 "nvme_ioq_poll_period_us": 0, 00:14:31.323 "io_queue_requests": 0, 00:14:31.323 
"delay_cmd_submit": true, 00:14:31.323 "transport_retry_count": 4, 00:14:31.323 "bdev_retry_count": 3, 00:14:31.323 "transport_ack_timeout": 0, 00:14:31.323 "ctrlr_loss_timeout_sec": 0, 00:14:31.323 "reconnect_delay_sec": 0, 00:14:31.323 "fast_io_fail_timeout_sec": 0, 00:14:31.323 "disable_auto_failback": false, 00:14:31.323 "generate_uuids": false, 00:14:31.323 "transport_tos": 0, 00:14:31.323 "nvme_error_stat": false, 00:14:31.323 "rdma_srq_size": 0, 00:14:31.323 "io_path_stat": false, 00:14:31.323 "allow_accel_sequence": false, 00:14:31.323 "rdma_max_cq_size": 0, 00:14:31.323 "rdma_cm_event_timeout_ms": 0, 00:14:31.323 "dhchap_digests": [ 00:14:31.323 "sha256", 00:14:31.323 "sha384", 00:14:31.323 "sha512" 00:14:31.323 ], 00:14:31.323 "dhchap_dhgroups": [ 00:14:31.323 "null", 00:14:31.323 "ffdhe2048", 00:14:31.323 "ffdhe3072", 00:14:31.323 "ffdhe4096", 00:14:31.323 "ffdhe6144", 00:14:31.323 "ffdhe8192" 00:14:31.323 ] 00:14:31.323 } 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "method": "bdev_nvme_set_hotplug", 00:14:31.323 "params": { 00:14:31.323 "period_us": 100000, 00:14:31.323 "enable": false 00:14:31.323 } 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "method": "bdev_malloc_create", 00:14:31.323 "params": { 00:14:31.323 "name": "malloc0", 00:14:31.323 "num_blocks": 8192, 00:14:31.323 "block_size": 4096, 00:14:31.323 "physical_block_size": 4096, 00:14:31.323 "uuid": "99138d41-e9ce-4e54-ac7b-e8398f2c0438", 00:14:31.323 "optimal_io_boundary": 0 00:14:31.323 } 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "method": "bdev_wait_for_examine" 00:14:31.323 } 00:14:31.323 ] 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "subsystem": "nbd", 00:14:31.323 "config": [] 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "subsystem": "scheduler", 00:14:31.323 "config": [ 00:14:31.323 { 00:14:31.323 "method": "framework_set_scheduler", 00:14:31.323 "params": { 00:14:31.323 "name": "static" 00:14:31.323 } 00:14:31.323 } 00:14:31.323 ] 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "subsystem": "nvmf", 00:14:31.323 "config": [ 00:14:31.323 { 00:14:31.323 "method": "nvmf_set_config", 00:14:31.323 "params": { 00:14:31.323 "discovery_filter": "match_any", 00:14:31.323 "admin_cmd_passthru": { 00:14:31.323 "identify_ctrlr": false 00:14:31.323 } 00:14:31.323 } 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "method": "nvmf_set_max_subsystems", 00:14:31.323 "params": { 00:14:31.323 "max_subsystems": 1024 00:14:31.323 } 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "method": "nvmf_set_crdt", 00:14:31.323 "params": { 00:14:31.323 "crdt1": 0, 00:14:31.323 "crdt2": 0, 00:14:31.323 "crdt3": 0 00:14:31.323 } 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "method": "nvmf_create_transport", 00:14:31.323 "params": { 00:14:31.323 "trtype": "TCP", 00:14:31.323 "max_queue_depth": 128, 00:14:31.323 "max_io_qpairs_per_ctrlr": 127, 00:14:31.323 "in_capsule_data_size": 4096, 00:14:31.323 "max_io_size": 131072, 00:14:31.323 "io_unit_size": 131072, 00:14:31.323 "max_aq_depth": 128, 00:14:31.323 "num_shared_buffers": 511, 00:14:31.323 "buf_cache_size": 4294967295, 00:14:31.323 "dif_insert_or_strip": false, 00:14:31.323 "zcopy": false, 00:14:31.323 "c2h_success": false, 00:14:31.323 "sock_priority": 0, 00:14:31.323 "abort_timeout_sec": 1, 00:14:31.323 "ack_timeout": 0, 00:14:31.323 "data_wr_pool_size": 0 00:14:31.323 } 00:14:31.323 }, 00:14:31.323 { 00:14:31.323 "method": "nvmf_create_subsystem", 00:14:31.323 "params": { 00:14:31.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.323 "allow_any_host": false, 00:14:31.323 "serial_number": 
"SPDK00000000000001", 00:14:31.324 "model_number": "SPDK bdev Controller", 00:14:31.324 "max_namespaces": 10, 00:14:31.324 "min_cntlid": 1, 00:14:31.324 "max_cntlid": 65519, 00:14:31.324 "ana_reporting": false 00:14:31.324 } 00:14:31.324 }, 00:14:31.324 { 00:14:31.324 "method": "nvmf_subsystem_add_host", 00:14:31.324 "params": { 00:14:31.324 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.324 "host": "nqn.2016-06.io.spdk:host1", 00:14:31.324 "psk": "/tmp/tmp.PIZi588o2F" 00:14:31.324 } 00:14:31.324 }, 00:14:31.324 { 00:14:31.324 "method": "nvmf_subsystem_add_ns", 00:14:31.324 "params": { 00:14:31.324 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.324 "namespace": { 00:14:31.324 "nsid": 1, 00:14:31.324 "bdev_name": "malloc0", 00:14:31.324 "nguid": "99138D41E9CE4E54AC7BE8398F2C0438", 00:14:31.324 "uuid": "99138d41-e9ce-4e54-ac7b-e8398f2c0438", 00:14:31.324 "no_auto_visible": false 00:14:31.324 } 00:14:31.324 } 00:14:31.324 }, 00:14:31.324 { 00:14:31.324 "method": "nvmf_subsystem_add_listener", 00:14:31.324 "params": { 00:14:31.324 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.324 "listen_address": { 00:14:31.324 "trtype": "TCP", 00:14:31.324 "adrfam": "IPv4", 00:14:31.324 "traddr": "10.0.0.2", 00:14:31.324 "trsvcid": "4420" 00:14:31.324 }, 00:14:31.324 "secure_channel": true 00:14:31.324 } 00:14:31.324 } 00:14:31.324 ] 00:14:31.324 } 00:14:31.324 ] 00:14:31.324 }' 00:14:31.324 19:04:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:31.583 19:04:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:31.583 "subsystems": [ 00:14:31.583 { 00:14:31.583 "subsystem": "keyring", 00:14:31.583 "config": [] 00:14:31.583 }, 00:14:31.583 { 00:14:31.583 "subsystem": "iobuf", 00:14:31.583 "config": [ 00:14:31.583 { 00:14:31.583 "method": "iobuf_set_options", 00:14:31.583 "params": { 00:14:31.583 "small_pool_count": 8192, 00:14:31.583 "large_pool_count": 1024, 00:14:31.583 "small_bufsize": 8192, 00:14:31.583 "large_bufsize": 135168 00:14:31.583 } 00:14:31.583 } 00:14:31.583 ] 00:14:31.583 }, 00:14:31.583 { 00:14:31.583 "subsystem": "sock", 00:14:31.583 "config": [ 00:14:31.583 { 00:14:31.583 "method": "sock_set_default_impl", 00:14:31.583 "params": { 00:14:31.583 "impl_name": "uring" 00:14:31.583 } 00:14:31.583 }, 00:14:31.583 { 00:14:31.583 "method": "sock_impl_set_options", 00:14:31.583 "params": { 00:14:31.583 "impl_name": "ssl", 00:14:31.583 "recv_buf_size": 4096, 00:14:31.583 "send_buf_size": 4096, 00:14:31.583 "enable_recv_pipe": true, 00:14:31.583 "enable_quickack": false, 00:14:31.583 "enable_placement_id": 0, 00:14:31.583 "enable_zerocopy_send_server": true, 00:14:31.583 "enable_zerocopy_send_client": false, 00:14:31.583 "zerocopy_threshold": 0, 00:14:31.583 "tls_version": 0, 00:14:31.583 "enable_ktls": false 00:14:31.583 } 00:14:31.583 }, 00:14:31.583 { 00:14:31.583 "method": "sock_impl_set_options", 00:14:31.583 "params": { 00:14:31.583 "impl_name": "posix", 00:14:31.583 "recv_buf_size": 2097152, 00:14:31.583 "send_buf_size": 2097152, 00:14:31.583 "enable_recv_pipe": true, 00:14:31.583 "enable_quickack": false, 00:14:31.583 "enable_placement_id": 0, 00:14:31.583 "enable_zerocopy_send_server": true, 00:14:31.583 "enable_zerocopy_send_client": false, 00:14:31.583 "zerocopy_threshold": 0, 00:14:31.583 "tls_version": 0, 00:14:31.583 "enable_ktls": false 00:14:31.583 } 00:14:31.583 }, 00:14:31.583 { 00:14:31.583 "method": "sock_impl_set_options", 00:14:31.583 "params": { 00:14:31.583 "impl_name": "uring", 
00:14:31.583 "recv_buf_size": 2097152, 00:14:31.583 "send_buf_size": 2097152, 00:14:31.583 "enable_recv_pipe": true, 00:14:31.583 "enable_quickack": false, 00:14:31.583 "enable_placement_id": 0, 00:14:31.583 "enable_zerocopy_send_server": false, 00:14:31.583 "enable_zerocopy_send_client": false, 00:14:31.583 "zerocopy_threshold": 0, 00:14:31.583 "tls_version": 0, 00:14:31.583 "enable_ktls": false 00:14:31.583 } 00:14:31.583 } 00:14:31.583 ] 00:14:31.583 }, 00:14:31.583 { 00:14:31.583 "subsystem": "vmd", 00:14:31.583 "config": [] 00:14:31.583 }, 00:14:31.583 { 00:14:31.583 "subsystem": "accel", 00:14:31.583 "config": [ 00:14:31.583 { 00:14:31.583 "method": "accel_set_options", 00:14:31.583 "params": { 00:14:31.583 "small_cache_size": 128, 00:14:31.583 "large_cache_size": 16, 00:14:31.583 "task_count": 2048, 00:14:31.583 "sequence_count": 2048, 00:14:31.583 "buf_count": 2048 00:14:31.583 } 00:14:31.583 } 00:14:31.583 ] 00:14:31.583 }, 00:14:31.583 { 00:14:31.583 "subsystem": "bdev", 00:14:31.583 "config": [ 00:14:31.583 { 00:14:31.583 "method": "bdev_set_options", 00:14:31.583 "params": { 00:14:31.583 "bdev_io_pool_size": 65535, 00:14:31.583 "bdev_io_cache_size": 256, 00:14:31.583 "bdev_auto_examine": true, 00:14:31.583 "iobuf_small_cache_size": 128, 00:14:31.583 "iobuf_large_cache_size": 16 00:14:31.583 } 00:14:31.583 }, 00:14:31.583 { 00:14:31.583 "method": "bdev_raid_set_options", 00:14:31.583 "params": { 00:14:31.583 "process_window_size_kb": 1024 00:14:31.583 } 00:14:31.583 }, 00:14:31.583 { 00:14:31.583 "method": "bdev_iscsi_set_options", 00:14:31.583 "params": { 00:14:31.583 "timeout_sec": 30 00:14:31.583 } 00:14:31.583 }, 00:14:31.583 { 00:14:31.583 "method": "bdev_nvme_set_options", 00:14:31.583 "params": { 00:14:31.583 "action_on_timeout": "none", 00:14:31.583 "timeout_us": 0, 00:14:31.583 "timeout_admin_us": 0, 00:14:31.583 "keep_alive_timeout_ms": 10000, 00:14:31.583 "arbitration_burst": 0, 00:14:31.583 "low_priority_weight": 0, 00:14:31.584 "medium_priority_weight": 0, 00:14:31.584 "high_priority_weight": 0, 00:14:31.584 "nvme_adminq_poll_period_us": 10000, 00:14:31.584 "nvme_ioq_poll_period_us": 0, 00:14:31.584 "io_queue_requests": 512, 00:14:31.584 "delay_cmd_submit": true, 00:14:31.584 "transport_retry_count": 4, 00:14:31.584 "bdev_retry_count": 3, 00:14:31.584 "transport_ack_timeout": 0, 00:14:31.584 "ctrlr_loss_timeout_sec": 0, 00:14:31.584 "reconnect_delay_sec": 0, 00:14:31.584 "fast_io_fail_timeout_sec": 0, 00:14:31.584 "disable_auto_failback": false, 00:14:31.584 "generate_uuids": false, 00:14:31.584 "transport_tos": 0, 00:14:31.584 "nvme_error_stat": false, 00:14:31.584 "rdma_srq_size": 0, 00:14:31.584 "io_path_stat": false, 00:14:31.584 "allow_accel_sequence": false, 00:14:31.584 "rdma_max_cq_size": 0, 00:14:31.584 "rdma_cm_event_timeout_ms": 0, 00:14:31.584 "dhchap_digests": [ 00:14:31.584 "sha256", 00:14:31.584 "sha384", 00:14:31.584 "sha512" 00:14:31.584 ], 00:14:31.584 "dhchap_dhgroups": [ 00:14:31.584 "null", 00:14:31.584 "ffdhe2048", 00:14:31.584 "ffdhe3072", 00:14:31.584 "ffdhe4096", 00:14:31.584 "ffdhe6144", 00:14:31.584 "ffdhe8192" 00:14:31.584 ] 00:14:31.584 } 00:14:31.584 }, 00:14:31.584 { 00:14:31.584 "method": "bdev_nvme_attach_controller", 00:14:31.584 "params": { 00:14:31.584 "name": "TLSTEST", 00:14:31.584 "trtype": "TCP", 00:14:31.584 "adrfam": "IPv4", 00:14:31.584 "traddr": "10.0.0.2", 00:14:31.584 "trsvcid": "4420", 00:14:31.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.584 "prchk_reftag": false, 00:14:31.584 "prchk_guard": false, 00:14:31.584 
"ctrlr_loss_timeout_sec": 0, 00:14:31.584 "reconnect_delay_sec": 0, 00:14:31.584 "fast_io_fail_timeout_sec": 0, 00:14:31.584 "psk": "/tmp/tmp.PIZi588o2F", 00:14:31.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:31.584 "hdgst": false, 00:14:31.584 "ddgst": false 00:14:31.584 } 00:14:31.584 }, 00:14:31.584 { 00:14:31.584 "method": "bdev_nvme_set_hotplug", 00:14:31.584 "params": { 00:14:31.584 "period_us": 100000, 00:14:31.584 "enable": false 00:14:31.584 } 00:14:31.584 }, 00:14:31.584 { 00:14:31.584 "method": "bdev_wait_for_examine" 00:14:31.584 } 00:14:31.584 ] 00:14:31.584 }, 00:14:31.584 { 00:14:31.584 "subsystem": "nbd", 00:14:31.584 "config": [] 00:14:31.584 } 00:14:31.584 ] 00:14:31.584 }' 00:14:31.584 19:04:58 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73766 00:14:31.584 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73766 ']' 00:14:31.584 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73766 00:14:31.584 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:31.584 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.584 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73766 00:14:31.584 killing process with pid 73766 00:14:31.584 Received shutdown signal, test time was about 10.000000 seconds 00:14:31.584 00:14:31.584 Latency(us) 00:14:31.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.584 =================================================================================================================== 00:14:31.584 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:31.584 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:31.584 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:31.584 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73766' 00:14:31.584 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73766 00:14:31.584 [2024-07-15 19:04:58.654895] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:31.584 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73766 00:14:31.843 19:04:58 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73711 00:14:31.843 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73711 ']' 00:14:31.843 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73711 00:14:31.843 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:31.843 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.843 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73711 00:14:31.843 killing process with pid 73711 00:14:31.843 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:31.843 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:31.843 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73711' 00:14:31.843 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73711 00:14:31.844 [2024-07-15 19:04:58.896628] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled 
for removal in v24.09 hit 1 times 00:14:31.844 19:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73711 00:14:31.844 19:04:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:31.844 19:04:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:31.844 "subsystems": [ 00:14:31.844 { 00:14:31.844 "subsystem": "keyring", 00:14:31.844 "config": [] 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "subsystem": "iobuf", 00:14:31.844 "config": [ 00:14:31.844 { 00:14:31.844 "method": "iobuf_set_options", 00:14:31.844 "params": { 00:14:31.844 "small_pool_count": 8192, 00:14:31.844 "large_pool_count": 1024, 00:14:31.844 "small_bufsize": 8192, 00:14:31.844 "large_bufsize": 135168 00:14:31.844 } 00:14:31.844 } 00:14:31.844 ] 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "subsystem": "sock", 00:14:31.844 "config": [ 00:14:31.844 { 00:14:31.844 "method": "sock_set_default_impl", 00:14:31.844 "params": { 00:14:31.844 "impl_name": "uring" 00:14:31.844 } 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "method": "sock_impl_set_options", 00:14:31.844 "params": { 00:14:31.844 "impl_name": "ssl", 00:14:31.844 "recv_buf_size": 4096, 00:14:31.844 "send_buf_size": 4096, 00:14:31.844 "enable_recv_pipe": true, 00:14:31.844 "enable_quickack": false, 00:14:31.844 "enable_placement_id": 0, 00:14:31.844 "enable_zerocopy_send_server": true, 00:14:31.844 "enable_zerocopy_send_client": false, 00:14:31.844 "zerocopy_threshold": 0, 00:14:31.844 "tls_version": 0, 00:14:31.844 "enable_ktls": false 00:14:31.844 } 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "method": "sock_impl_set_options", 00:14:31.844 "params": { 00:14:31.844 "impl_name": "posix", 00:14:31.844 "recv_buf_size": 2097152, 00:14:31.844 "send_buf_size": 2097152, 00:14:31.844 "enable_recv_pipe": true, 00:14:31.844 "enable_quickack": false, 00:14:31.844 "enable_placement_id": 0, 00:14:31.844 "enable_zerocopy_send_server": true, 00:14:31.844 "enable_zerocopy_send_client": false, 00:14:31.844 "zerocopy_threshold": 0, 00:14:31.844 "tls_version": 0, 00:14:31.844 "enable_ktls": false 00:14:31.844 } 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "method": "sock_impl_set_options", 00:14:31.844 "params": { 00:14:31.844 "impl_name": "uring", 00:14:31.844 "recv_buf_size": 2097152, 00:14:31.844 "send_buf_size": 2097152, 00:14:31.844 "enable_recv_pipe": true, 00:14:31.844 "enable_quickack": false, 00:14:31.844 "enable_placement_id": 0, 00:14:31.844 "enable_zerocopy_send_server": false, 00:14:31.844 "enable_zerocopy_send_client": false, 00:14:31.844 "zerocopy_threshold": 0, 00:14:31.844 "tls_version": 0, 00:14:31.844 "enable_ktls": false 00:14:31.844 } 00:14:31.844 } 00:14:31.844 ] 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "subsystem": "vmd", 00:14:31.844 "config": [] 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "subsystem": "accel", 00:14:31.844 "config": [ 00:14:31.844 { 00:14:31.844 "method": "accel_set_options", 00:14:31.844 "params": { 00:14:31.844 "small_cache_size": 128, 00:14:31.844 "large_cache_size": 16, 00:14:31.844 "task_count": 2048, 00:14:31.844 "sequence_count": 2048, 00:14:31.844 "buf_count": 2048 00:14:31.844 } 00:14:31.844 } 00:14:31.844 ] 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "subsystem": "bdev", 00:14:31.844 "config": [ 00:14:31.844 { 00:14:31.844 "method": "bdev_set_options", 00:14:31.844 "params": { 00:14:31.844 "bdev_io_pool_size": 65535, 00:14:31.844 "bdev_io_cache_size": 256, 00:14:31.844 "bdev_auto_examine": true, 00:14:31.844 "iobuf_small_cache_size": 128, 00:14:31.844 "iobuf_large_cache_size": 16 
00:14:31.844 } 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "method": "bdev_raid_set_options", 00:14:31.844 "params": { 00:14:31.844 "process_window_size_kb": 1024 00:14:31.844 } 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "method": "bdev_iscsi_set_options", 00:14:31.844 "params": { 00:14:31.844 "timeout_sec": 30 00:14:31.844 } 00:14:31.844 }, 00:14:31.844 { 00:14:31.844 "method": "bdev_nvme_set_options", 00:14:31.844 "params": { 00:14:31.844 "action_on_timeout": "none", 00:14:31.844 "timeout_us": 0, 00:14:31.844 "timeout_admin_us": 0, 00:14:31.844 "keep_alive_timeout_ms": 10000, 00:14:31.844 "arbitration_burst": 0, 00:14:31.844 "low_priority_weight": 0, 00:14:31.844 "medium_priority_weight": 0, 00:14:31.844 "high_priority_weight": 0, 00:14:31.844 "nvme_adminq_poll_period_us": 10000, 00:14:31.844 "nvme_ioq_poll_period_us": 0, 00:14:31.844 "io_queue_requests": 0, 00:14:31.844 "delay_cmd_submit": true, 00:14:31.844 "transport_retry_count": 4, 00:14:31.844 "bdev_retry_count": 3, 00:14:31.844 "transport_ack_timeout": 0, 00:14:31.844 "ctrlr_loss_timeout_sec": 0, 00:14:31.844 "reconnect_delay_sec": 0, 00:14:31.844 "fast_io_fail_timeout_sec": 0, 00:14:31.844 "disable_auto_failback": false, 00:14:31.844 "generate_uuids": false, 00:14:31.844 "transport_tos": 0, 00:14:31.844 "nvme_error_stat": false, 00:14:31.844 "rdma_srq_size": 0, 00:14:31.844 "io_path_stat": false, 00:14:31.844 "allow_accel_sequence": false, 00:14:31.844 "rdma_max_cq_size": 0, 00:14:31.844 "rdma_cm_event_timeout_ms": 0, 00:14:31.844 "dhchap_digests": [ 00:14:31.844 "sha256", 00:14:31.844 "sha384", 00:14:31.844 "sha512" 00:14:31.844 ], 00:14:31.844 "dhchap_dhgroups": [ 00:14:31.844 "null", 00:14:31.844 "ffdhe2048", 00:14:31.844 "ffdhe3072", 00:14:31.844 "ffdhe4096", 00:14:31.844 "ffdhe6144", 00:14:31.844 "ffdhe8192" 00:14:31.844 ] 00:14:31.845 } 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "method": "bdev_nvme_set_hotplug", 00:14:31.845 "params": { 00:14:31.845 "period_us": 100000, 00:14:31.845 "enable": false 00:14:31.845 } 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "method": "bdev_malloc_create", 00:14:31.845 "params": { 00:14:31.845 "name": "malloc0", 00:14:31.845 "num_blocks": 8192, 00:14:31.845 "block_size": 4096, 00:14:31.845 "physical_block_size": 4096, 00:14:31.845 "uuid": "99138d41-e9ce-4e54-ac7b-e8398f2c0438", 00:14:31.845 "optimal_io_boundary": 0 00:14:31.845 } 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "method": "bdev_wait_for_examine" 00:14:31.845 } 00:14:31.845 ] 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "subsystem": "nbd", 00:14:31.845 "config": [] 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "subsystem": "scheduler", 00:14:31.845 "config": [ 00:14:31.845 { 00:14:31.845 "method": "framework_set_scheduler", 00:14:31.845 "params": { 00:14:31.845 "name": "static" 00:14:31.845 } 00:14:31.845 } 00:14:31.845 ] 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "subsystem": "nvmf", 00:14:31.845 "config": [ 00:14:31.845 { 00:14:31.845 "method": "nvmf_set_config", 00:14:31.845 "params": { 00:14:31.845 "discovery_filter": "match_any", 00:14:31.845 "admin_cmd_passthru": { 00:14:31.845 "identify_ctrlr": false 00:14:31.845 } 00:14:31.845 } 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "method": "nvmf_set_max_subsystems", 00:14:31.845 "params": { 00:14:31.845 "max_subsystems": 1024 00:14:31.845 } 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "method": "nvmf_set_crdt", 00:14:31.845 "params": { 00:14:31.845 "crdt1": 0, 00:14:31.845 "crdt2": 0, 00:14:31.845 "crdt3": 0 00:14:31.845 } 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "method": 
"nvmf_create_transport", 00:14:31.845 "params": { 00:14:31.845 "trtype": "TCP", 00:14:31.845 "max_queue_depth": 128, 00:14:31.845 "max_io_qpairs_per_ctrlr": 127, 00:14:31.845 "in_capsule_data_size": 4096, 00:14:31.845 "max_io_size": 131072, 00:14:31.845 "io_unit_size": 131072, 00:14:31.845 "max_aq_depth": 128, 00:14:31.845 "num_shared_buffers": 511, 00:14:31.845 "buf_cache_size": 4294967295, 00:14:31.845 "dif_insert_or_strip": false, 00:14:31.845 "zcopy": false, 00:14:31.845 "c2h_success": false, 00:14:31.845 "sock_priority": 0, 00:14:31.845 "abort_timeout_sec": 1, 00:14:31.845 "ack_timeout": 0, 00:14:31.845 "data_wr_pool_size": 0 00:14:31.845 } 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "method": "nvmf_create_subsystem", 00:14:31.845 "params": { 00:14:31.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.845 "allow_any_host": false, 00:14:31.845 "serial_number": "SPDK00000000000001", 00:14:31.845 "model_number": "SPDK bdev Controller", 00:14:31.845 "max_namespaces": 10, 00:14:31.845 "min_cntlid": 1, 00:14:31.845 "max_cntlid": 65519, 00:14:31.845 "ana_reporting": false 00:14:31.845 } 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "method": "nvmf_subsystem_add_host", 00:14:31.845 "params": { 00:14:31.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.845 "host": "nqn.2016-06.io.spdk:host1", 00:14:31.845 "psk": "/tmp/tmp.PIZi588o2F" 00:14:31.845 } 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "method": "nvmf_subsystem_add_ns", 00:14:31.845 "params": { 00:14:31.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.845 "namespace": { 00:14:31.845 "nsid": 1, 00:14:31.845 "bdev_name": "malloc0", 00:14:31.845 "nguid": "99138D41E9CE4E54AC7BE8398F2C0438", 00:14:31.845 "uuid": "99138d41-e9ce-4e54-ac7b-e8398f2c0438", 00:14:31.845 "no_auto_visible": false 00:14:31.845 } 00:14:31.845 } 00:14:31.845 }, 00:14:31.845 { 00:14:31.845 "method": "nvmf_subsystem_add_listener", 00:14:31.845 "params": { 00:14:31.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.845 "listen_address": { 00:14:31.845 "trtype": "TCP", 00:14:31.845 "adrfam": "IPv4", 00:14:31.845 "traddr": "10.0.0.2", 00:14:31.845 "trsvcid": "4420" 00:14:31.845 }, 00:14:31.845 "secure_channel": true 00:14:31.845 } 00:14:31.845 } 00:14:31.845 ] 00:14:31.845 } 00:14:31.845 ] 00:14:31.845 }' 00:14:31.845 19:04:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.845 19:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.845 19:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.105 19:04:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73809 00:14:32.105 19:04:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:32.105 19:04:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73809 00:14:32.105 19:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73809 ']' 00:14:32.105 19:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.105 19:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.105 19:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:32.105 19:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.105 19:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.105 [2024-07-15 19:04:59.192919] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:32.105 [2024-07-15 19:04:59.193017] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.105 [2024-07-15 19:04:59.333502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.364 [2024-07-15 19:04:59.432003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.364 [2024-07-15 19:04:59.432067] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.364 [2024-07-15 19:04:59.432080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.364 [2024-07-15 19:04:59.432089] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.364 [2024-07-15 19:04:59.432097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.364 [2024-07-15 19:04:59.432181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.364 [2024-07-15 19:04:59.600486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:32.622 [2024-07-15 19:04:59.669875] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.622 [2024-07-15 19:04:59.685791] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:32.622 [2024-07-15 19:04:59.701793] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:32.622 [2024-07-15 19:04:59.702020] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73846 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73846 /var/tmp/bdevperf.sock 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73846 ']' 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
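Aside for readability: the JSON blob echoed into nvmf_tgt on /dev/fd/62 above is almost entirely default values; the TLS-relevant part reduces to the same handful of RPCs that the harness later issues explicitly through rpc.py (see the setup_nvmf_tgt trace further down). A condensed sketch using only names and values taken from this log (paths repo-relative; socket-implementation tuning such as sock_set_default_impl omitted) -- this is not the literal command sequence run for this first case, which fed the full JSON at startup:

# Condensed rpc.py equivalent of the target config echoed above (sketch only).
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0        # 8192 x 4096 B blocks, matching the bdev_malloc_create params above
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.PIZi588o2F                               # deprecated path-based PSK ("psk" in the JSON)
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    # -k corresponds to "secure_channel": true on the listener, i.e. TLS on this port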
00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:33.190 19:05:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:33.190 "subsystems": [ 00:14:33.190 { 00:14:33.190 "subsystem": "keyring", 00:14:33.190 "config": [] 00:14:33.190 }, 00:14:33.190 { 00:14:33.190 "subsystem": "iobuf", 00:14:33.190 "config": [ 00:14:33.190 { 00:14:33.190 "method": "iobuf_set_options", 00:14:33.190 "params": { 00:14:33.190 "small_pool_count": 8192, 00:14:33.190 "large_pool_count": 1024, 00:14:33.190 "small_bufsize": 8192, 00:14:33.190 "large_bufsize": 135168 00:14:33.190 } 00:14:33.190 } 00:14:33.190 ] 00:14:33.190 }, 00:14:33.190 { 00:14:33.190 "subsystem": "sock", 00:14:33.190 "config": [ 00:14:33.190 { 00:14:33.190 "method": "sock_set_default_impl", 00:14:33.190 "params": { 00:14:33.190 "impl_name": "uring" 00:14:33.190 } 00:14:33.190 }, 00:14:33.190 { 00:14:33.190 "method": "sock_impl_set_options", 00:14:33.190 "params": { 00:14:33.190 "impl_name": "ssl", 00:14:33.190 "recv_buf_size": 4096, 00:14:33.190 "send_buf_size": 4096, 00:14:33.190 "enable_recv_pipe": true, 00:14:33.190 "enable_quickack": false, 00:14:33.190 "enable_placement_id": 0, 00:14:33.190 "enable_zerocopy_send_server": true, 00:14:33.190 "enable_zerocopy_send_client": false, 00:14:33.190 "zerocopy_threshold": 0, 00:14:33.190 "tls_version": 0, 00:14:33.190 "enable_ktls": false 00:14:33.190 } 00:14:33.190 }, 00:14:33.190 { 00:14:33.190 "method": "sock_impl_set_options", 00:14:33.190 "params": { 00:14:33.190 "impl_name": "posix", 00:14:33.190 "recv_buf_size": 2097152, 00:14:33.190 "send_buf_size": 2097152, 00:14:33.190 "enable_recv_pipe": true, 00:14:33.190 "enable_quickack": false, 00:14:33.190 "enable_placement_id": 0, 00:14:33.190 "enable_zerocopy_send_server": true, 00:14:33.190 "enable_zerocopy_send_client": false, 00:14:33.190 "zerocopy_threshold": 0, 00:14:33.190 "tls_version": 0, 00:14:33.190 "enable_ktls": false 00:14:33.190 } 00:14:33.190 }, 00:14:33.190 { 00:14:33.190 "method": "sock_impl_set_options", 00:14:33.190 "params": { 00:14:33.190 "impl_name": "uring", 00:14:33.190 "recv_buf_size": 2097152, 00:14:33.190 "send_buf_size": 2097152, 00:14:33.190 "enable_recv_pipe": true, 00:14:33.190 "enable_quickack": false, 00:14:33.190 "enable_placement_id": 0, 00:14:33.190 "enable_zerocopy_send_server": false, 00:14:33.190 "enable_zerocopy_send_client": false, 00:14:33.190 "zerocopy_threshold": 0, 00:14:33.190 "tls_version": 0, 00:14:33.190 "enable_ktls": false 00:14:33.190 } 00:14:33.190 } 00:14:33.190 ] 00:14:33.190 }, 00:14:33.190 { 00:14:33.190 "subsystem": "vmd", 00:14:33.190 "config": [] 00:14:33.190 }, 00:14:33.190 { 00:14:33.190 "subsystem": "accel", 00:14:33.190 "config": [ 00:14:33.190 { 00:14:33.190 "method": "accel_set_options", 00:14:33.190 "params": { 00:14:33.190 "small_cache_size": 128, 00:14:33.190 "large_cache_size": 16, 00:14:33.190 "task_count": 2048, 00:14:33.190 "sequence_count": 2048, 00:14:33.190 "buf_count": 2048 00:14:33.190 } 00:14:33.190 } 00:14:33.190 ] 00:14:33.190 }, 00:14:33.190 { 00:14:33.190 "subsystem": "bdev", 00:14:33.190 "config": [ 00:14:33.190 { 00:14:33.190 "method": "bdev_set_options", 00:14:33.190 "params": { 00:14:33.190 "bdev_io_pool_size": 65535, 00:14:33.190 
"bdev_io_cache_size": 256, 00:14:33.190 "bdev_auto_examine": true, 00:14:33.190 "iobuf_small_cache_size": 128, 00:14:33.190 "iobuf_large_cache_size": 16 00:14:33.190 } 00:14:33.190 }, 00:14:33.190 { 00:14:33.190 "method": "bdev_raid_set_options", 00:14:33.190 "params": { 00:14:33.190 "process_window_size_kb": 1024 00:14:33.190 } 00:14:33.190 }, 00:14:33.191 { 00:14:33.191 "method": "bdev_iscsi_set_options", 00:14:33.191 "params": { 00:14:33.191 "timeout_sec": 30 00:14:33.191 } 00:14:33.191 }, 00:14:33.191 { 00:14:33.191 "method": "bdev_nvme_set_options", 00:14:33.191 "params": { 00:14:33.191 "action_on_timeout": "none", 00:14:33.191 "timeout_us": 0, 00:14:33.191 "timeout_admin_us": 0, 00:14:33.191 "keep_alive_timeout_ms": 10000, 00:14:33.191 "arbitration_burst": 0, 00:14:33.191 "low_priority_weight": 0, 00:14:33.191 "medium_priority_weight": 0, 00:14:33.191 "high_priority_weight": 0, 00:14:33.191 "nvme_adminq_poll_period_us": 10000, 00:14:33.191 "nvme_ioq_poll_period_us": 0, 00:14:33.191 "io_queue_requests": 512, 00:14:33.191 "delay_cmd_submit": true, 00:14:33.191 "transport_retry_count": 4, 00:14:33.191 "bdev_retry_count": 3, 00:14:33.191 "transport_ack_timeout": 0, 00:14:33.191 "ctrlr_loss_timeout_sec": 0, 00:14:33.191 "reconnect_delay_sec": 0, 00:14:33.191 "fast_io_fail_timeout_sec": 0, 00:14:33.191 "disable_auto_failback": false, 00:14:33.191 "generate_uuids": false, 00:14:33.191 "transport_tos": 0, 00:14:33.191 "nvme_error_stat": false, 00:14:33.191 "rdma_srq_size": 0, 00:14:33.191 "io_path_stat": false, 00:14:33.191 "allow_accel_sequence": false, 00:14:33.191 "rdma_max_cq_size": 0, 00:14:33.191 "rdma_cm_event_timeout_ms": 0, 00:14:33.191 "dhchap_digests": [ 00:14:33.191 "sha256", 00:14:33.191 "sha384", 00:14:33.191 "sha512" 00:14:33.191 ], 00:14:33.191 "dhchap_dhgroups": [ 00:14:33.191 "null", 00:14:33.191 "ffdhe2048", 00:14:33.191 "ffdhe3072", 00:14:33.191 "ffdhe4096", 00:14:33.191 "ffdhe6144", 00:14:33.191 "ffdhe8192" 00:14:33.191 ] 00:14:33.191 } 00:14:33.191 }, 00:14:33.191 { 00:14:33.191 "method": "bdev_nvme_attach_controller", 00:14:33.191 "params": { 00:14:33.191 "name": "TLSTEST", 00:14:33.191 "trtype": "TCP", 00:14:33.191 "adrfam": "IPv4", 00:14:33.191 "traddr": "10.0.0.2", 00:14:33.191 "trsvcid": "4420", 00:14:33.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.191 "prchk_reftag": false, 00:14:33.191 "prchk_guard": false, 00:14:33.191 "ctrlr_loss_timeout_sec": 0, 00:14:33.191 "reconnect_delay_sec": 0, 00:14:33.191 "fast_io_fail_timeout_sec": 0, 00:14:33.191 "psk": "/tmp/tmp.PIZi588o2F", 00:14:33.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:33.191 "hdgst": false, 00:14:33.191 "ddgst": false 00:14:33.191 } 00:14:33.191 }, 00:14:33.191 { 00:14:33.191 "method": "bdev_nvme_set_hotplug", 00:14:33.191 "params": { 00:14:33.191 "period_us": 100000, 00:14:33.191 "enable": false 00:14:33.191 } 00:14:33.191 }, 00:14:33.191 { 00:14:33.191 "method": "bdev_wait_for_examine" 00:14:33.191 } 00:14:33.191 ] 00:14:33.191 }, 00:14:33.191 { 00:14:33.191 "subsystem": "nbd", 00:14:33.191 "config": [] 00:14:33.191 } 00:14:33.191 ] 00:14:33.191 }' 00:14:33.191 [2024-07-15 19:05:00.279340] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:14:33.191 [2024-07-15 19:05:00.279442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73846 ] 00:14:33.191 [2024-07-15 19:05:00.414538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.449 [2024-07-15 19:05:00.529695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.449 [2024-07-15 19:05:00.666319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:33.449 [2024-07-15 19:05:00.704964] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:33.449 [2024-07-15 19:05:00.705255] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:34.015 19:05:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.015 19:05:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:34.015 19:05:01 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:34.274 Running I/O for 10 seconds... 00:14:44.258 00:14:44.258 Latency(us) 00:14:44.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.258 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:44.258 Verification LBA range: start 0x0 length 0x2000 00:14:44.258 TLSTESTn1 : 10.02 4015.48 15.69 0.00 0.00 31812.86 7060.01 37653.41 00:14:44.258 =================================================================================================================== 00:14:44.258 Total : 4015.48 15.69 0.00 0.00 31812.86 7060.01 37653.41 00:14:44.258 0 00:14:44.259 19:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:44.259 19:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73846 00:14:44.259 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73846 ']' 00:14:44.259 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73846 00:14:44.259 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:44.259 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.259 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73846 00:14:44.259 killing process with pid 73846 00:14:44.259 Received shutdown signal, test time was about 10.000000 seconds 00:14:44.259 00:14:44.259 Latency(us) 00:14:44.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.259 =================================================================================================================== 00:14:44.259 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.259 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:44.259 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:44.259 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73846' 00:14:44.259 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73846 00:14:44.259 [2024-07-15 19:05:11.412742] app.c:1024:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:44.259 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73846 00:14:44.517 19:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73809 00:14:44.517 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73809 ']' 00:14:44.517 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73809 00:14:44.517 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:44.517 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.517 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73809 00:14:44.517 killing process with pid 73809 00:14:44.517 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:44.517 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:44.517 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73809' 00:14:44.517 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73809 00:14:44.517 [2024-07-15 19:05:11.660365] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:44.517 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73809 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73980 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73980 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73980 ']' 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.775 19:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.775 [2024-07-15 19:05:11.968856] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:44.775 [2024-07-15 19:05:11.969429] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.034 [2024-07-15 19:05:12.109172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.034 [2024-07-15 19:05:12.223099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
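The repeated kill/wait blocks above (and after every later case) come from the killprocess helper in autotest_common.sh. Reconstructed from its xtrace, it is roughly the following simplified sketch; the real helper also special-cases processes running under sudo (the 'reactor_N = sudo' test visible above):

# killprocess, simplified sketch reconstructed from the xtrace in this log.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid"                                   # fails if the PID is already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap it and propagate its exit status
}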
00:14:45.034 [2024-07-15 19:05:12.223144] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.034 [2024-07-15 19:05:12.223154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.034 [2024-07-15 19:05:12.223162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.034 [2024-07-15 19:05:12.223169] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.034 [2024-07-15 19:05:12.223192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.034 [2024-07-15 19:05:12.279440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:45.969 19:05:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.969 19:05:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:45.969 19:05:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.969 19:05:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:45.969 19:05:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.969 19:05:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.969 19:05:12 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.PIZi588o2F 00:14:45.969 19:05:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PIZi588o2F 00:14:45.969 19:05:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:45.969 [2024-07-15 19:05:13.245812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.227 19:05:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:46.484 19:05:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:46.743 [2024-07-15 19:05:13.810061] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:46.743 [2024-07-15 19:05:13.810304] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.743 19:05:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:47.001 malloc0 00:14:47.001 19:05:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:47.258 19:05:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PIZi588o2F 00:14:47.258 [2024-07-15 19:05:14.510471] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:47.258 19:05:14 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=74037 00:14:47.258 19:05:14 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:47.258 19:05:14 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
00:14:47.258 19:05:14 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 74037 /var/tmp/bdevperf.sock 00:14:47.258 19:05:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74037 ']' 00:14:47.258 19:05:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.258 19:05:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.258 19:05:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.258 19:05:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.258 19:05:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.516 [2024-07-15 19:05:14.584748] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:47.516 [2024-07-15 19:05:14.585491] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74037 ] 00:14:47.516 [2024-07-15 19:05:14.728429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.804 [2024-07-15 19:05:14.840745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.804 [2024-07-15 19:05:14.897254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:48.371 19:05:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.371 19:05:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:48.371 19:05:15 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PIZi588o2F 00:14:48.630 19:05:15 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:48.889 [2024-07-15 19:05:16.015422] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:48.889 nvme0n1 00:14:48.889 19:05:16 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:49.148 Running I/O for 1 seconds... 
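This case exercises the non-deprecated flow: instead of handing the controller a PSK file path, the key is first registered in the bdevperf process's keyring and then referenced by name. The three RPCs traced above (target/tls.sh@227-232), in isolation and with paths abbreviated:

# Keyring-based TLS attach, as traced above.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PIZi588o2F
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests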
00:14:50.085 00:14:50.085 Latency(us) 00:14:50.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.085 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:50.085 Verification LBA range: start 0x0 length 0x2000 00:14:50.085 nvme0n1 : 1.03 2977.07 11.63 0.00 0.00 42378.63 10009.13 27882.59 00:14:50.085 =================================================================================================================== 00:14:50.085 Total : 2977.07 11.63 0.00 0.00 42378.63 10009.13 27882.59 00:14:50.085 0 00:14:50.085 19:05:17 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 74037 00:14:50.085 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74037 ']' 00:14:50.085 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74037 00:14:50.085 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:50.085 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.085 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74037 00:14:50.085 killing process with pid 74037 00:14:50.085 Received shutdown signal, test time was about 1.000000 seconds 00:14:50.085 00:14:50.085 Latency(us) 00:14:50.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.085 =================================================================================================================== 00:14:50.085 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.085 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:50.085 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:50.085 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74037' 00:14:50.085 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74037 00:14:50.085 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74037 00:14:50.344 19:05:17 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73980 00:14:50.344 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73980 ']' 00:14:50.344 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73980 00:14:50.344 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:50.344 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.344 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73980 00:14:50.344 killing process with pid 73980 00:14:50.344 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:50.344 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:50.344 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73980' 00:14:50.344 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73980 00:14:50.344 [2024-07-15 19:05:17.547028] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:50.344 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73980 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74087 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74087 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74087 ']' 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.912 19:05:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.912 [2024-07-15 19:05:17.986278] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:50.912 [2024-07-15 19:05:17.986428] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.912 [2024-07-15 19:05:18.124975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.170 [2024-07-15 19:05:18.295648] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.170 [2024-07-15 19:05:18.295729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.170 [2024-07-15 19:05:18.295747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.170 [2024-07-15 19:05:18.295756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.170 [2024-07-15 19:05:18.295764] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
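Each daemon launch above is followed by waitforlisten, whose xtrace shows only its entry (rpc_addr, max_retries=100, the 'Waiting for process...' echo) and its exit ('(( i == 0 ))' / 'return 0'), because the polling loop runs under xtrace_disable. A minimal reconstruction is sketched below; the probe used inside the loop is not visible in this log and is an assumption:

# waitforlisten, sketched from its visible entry/exit in the trace above.
# The probe inside the loop is an assumption (hidden here by xtrace_disable).
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i != 0; i--)); do
        # assumed probe: ask the RPC server for its method list until it answers
        scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done
    (( i == 0 )) && return 1     # timed out
    return 0
}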
00:14:51.170 [2024-07-15 19:05:18.295795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.170 [2024-07-15 19:05:18.372615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:51.741 19:05:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.741 19:05:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:51.741 19:05:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.741 19:05:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:51.741 19:05:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:51.741 19:05:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.741 19:05:19 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:14:51.741 19:05:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.741 19:05:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:51.741 [2024-07-15 19:05:19.014535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.999 malloc0 00:14:51.999 [2024-07-15 19:05:19.053192] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:51.999 [2024-07-15 19:05:19.053608] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.999 19:05:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.999 19:05:19 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=74119 00:14:51.999 19:05:19 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:51.999 19:05:19 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 74119 /var/tmp/bdevperf.sock 00:14:51.999 19:05:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74119 ']' 00:14:51.999 19:05:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.999 19:05:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.999 19:05:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.000 19:05:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.000 19:05:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.000 [2024-07-15 19:05:19.167139] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
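After the one-second verify pass below, the script snapshots both sides with save_config (the tgtcfg and bperfcfg dumps that follow). The notable property of those dumps is that, with the keyring flow, both configurations reference the key by name ("key0") while the key file path appears only under keyring_file_add_key. A hand-rolled version of that kind of check is sketched below purely as an illustration; the output file names are hypothetical and this is not the assertion the harness itself performs in this excerpt:

# Sketch only: dump both configs and confirm the NVMe objects reference the key
# by name, while the file path shows up only in the keyring section.
scripts/rpc.py save_config                           > /tmp/tgtcfg.json    # hypothetical file name
scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > /tmp/bperfcfg.json  # hypothetical file name
grep -n '"psk": "key0"' /tmp/tgtcfg.json /tmp/bperfcfg.json
grep -n '"path": "/tmp/tmp.PIZi588o2F"' /tmp/tgtcfg.json /tmp/bperfcfg.json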
00:14:52.000 [2024-07-15 19:05:19.167638] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74119 ] 00:14:52.258 [2024-07-15 19:05:19.319421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.258 [2024-07-15 19:05:19.436636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.258 [2024-07-15 19:05:19.496362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:53.197 19:05:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.197 19:05:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:53.197 19:05:20 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PIZi588o2F 00:14:53.197 19:05:20 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:53.456 [2024-07-15 19:05:20.631821] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.456 nvme0n1 00:14:53.456 19:05:20 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:53.714 Running I/O for 1 seconds... 00:14:54.652 00:14:54.652 Latency(us) 00:14:54.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.652 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.652 Verification LBA range: start 0x0 length 0x2000 00:14:54.652 nvme0n1 : 1.02 3671.26 14.34 0.00 0.00 34400.70 3544.90 21090.68 00:14:54.652 =================================================================================================================== 00:14:54.652 Total : 3671.26 14.34 0.00 0.00 34400.70 3544.90 21090.68 00:14:54.652 0 00:14:54.652 19:05:21 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:14:54.652 19:05:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.652 19:05:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.912 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.912 19:05:22 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:14:54.912 "subsystems": [ 00:14:54.912 { 00:14:54.912 "subsystem": "keyring", 00:14:54.912 "config": [ 00:14:54.912 { 00:14:54.912 "method": "keyring_file_add_key", 00:14:54.912 "params": { 00:14:54.912 "name": "key0", 00:14:54.912 "path": "/tmp/tmp.PIZi588o2F" 00:14:54.912 } 00:14:54.912 } 00:14:54.912 ] 00:14:54.912 }, 00:14:54.912 { 00:14:54.912 "subsystem": "iobuf", 00:14:54.912 "config": [ 00:14:54.912 { 00:14:54.912 "method": "iobuf_set_options", 00:14:54.912 "params": { 00:14:54.912 "small_pool_count": 8192, 00:14:54.912 "large_pool_count": 1024, 00:14:54.912 "small_bufsize": 8192, 00:14:54.912 "large_bufsize": 135168 00:14:54.912 } 00:14:54.912 } 00:14:54.912 ] 00:14:54.912 }, 00:14:54.912 { 00:14:54.912 "subsystem": "sock", 00:14:54.912 "config": [ 00:14:54.912 { 00:14:54.912 "method": "sock_set_default_impl", 00:14:54.912 "params": { 00:14:54.912 "impl_name": "uring" 
00:14:54.913 } 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "method": "sock_impl_set_options", 00:14:54.913 "params": { 00:14:54.913 "impl_name": "ssl", 00:14:54.913 "recv_buf_size": 4096, 00:14:54.913 "send_buf_size": 4096, 00:14:54.913 "enable_recv_pipe": true, 00:14:54.913 "enable_quickack": false, 00:14:54.913 "enable_placement_id": 0, 00:14:54.913 "enable_zerocopy_send_server": true, 00:14:54.913 "enable_zerocopy_send_client": false, 00:14:54.913 "zerocopy_threshold": 0, 00:14:54.913 "tls_version": 0, 00:14:54.913 "enable_ktls": false 00:14:54.913 } 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "method": "sock_impl_set_options", 00:14:54.913 "params": { 00:14:54.913 "impl_name": "posix", 00:14:54.913 "recv_buf_size": 2097152, 00:14:54.913 "send_buf_size": 2097152, 00:14:54.913 "enable_recv_pipe": true, 00:14:54.913 "enable_quickack": false, 00:14:54.913 "enable_placement_id": 0, 00:14:54.913 "enable_zerocopy_send_server": true, 00:14:54.913 "enable_zerocopy_send_client": false, 00:14:54.913 "zerocopy_threshold": 0, 00:14:54.913 "tls_version": 0, 00:14:54.913 "enable_ktls": false 00:14:54.913 } 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "method": "sock_impl_set_options", 00:14:54.913 "params": { 00:14:54.913 "impl_name": "uring", 00:14:54.913 "recv_buf_size": 2097152, 00:14:54.913 "send_buf_size": 2097152, 00:14:54.913 "enable_recv_pipe": true, 00:14:54.913 "enable_quickack": false, 00:14:54.913 "enable_placement_id": 0, 00:14:54.913 "enable_zerocopy_send_server": false, 00:14:54.913 "enable_zerocopy_send_client": false, 00:14:54.913 "zerocopy_threshold": 0, 00:14:54.913 "tls_version": 0, 00:14:54.913 "enable_ktls": false 00:14:54.913 } 00:14:54.913 } 00:14:54.913 ] 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "subsystem": "vmd", 00:14:54.913 "config": [] 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "subsystem": "accel", 00:14:54.913 "config": [ 00:14:54.913 { 00:14:54.913 "method": "accel_set_options", 00:14:54.913 "params": { 00:14:54.913 "small_cache_size": 128, 00:14:54.913 "large_cache_size": 16, 00:14:54.913 "task_count": 2048, 00:14:54.913 "sequence_count": 2048, 00:14:54.913 "buf_count": 2048 00:14:54.913 } 00:14:54.913 } 00:14:54.913 ] 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "subsystem": "bdev", 00:14:54.913 "config": [ 00:14:54.913 { 00:14:54.913 "method": "bdev_set_options", 00:14:54.913 "params": { 00:14:54.913 "bdev_io_pool_size": 65535, 00:14:54.913 "bdev_io_cache_size": 256, 00:14:54.913 "bdev_auto_examine": true, 00:14:54.913 "iobuf_small_cache_size": 128, 00:14:54.913 "iobuf_large_cache_size": 16 00:14:54.913 } 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "method": "bdev_raid_set_options", 00:14:54.913 "params": { 00:14:54.913 "process_window_size_kb": 1024 00:14:54.913 } 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "method": "bdev_iscsi_set_options", 00:14:54.913 "params": { 00:14:54.913 "timeout_sec": 30 00:14:54.913 } 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "method": "bdev_nvme_set_options", 00:14:54.913 "params": { 00:14:54.913 "action_on_timeout": "none", 00:14:54.913 "timeout_us": 0, 00:14:54.913 "timeout_admin_us": 0, 00:14:54.913 "keep_alive_timeout_ms": 10000, 00:14:54.913 "arbitration_burst": 0, 00:14:54.913 "low_priority_weight": 0, 00:14:54.913 "medium_priority_weight": 0, 00:14:54.913 "high_priority_weight": 0, 00:14:54.913 "nvme_adminq_poll_period_us": 10000, 00:14:54.913 "nvme_ioq_poll_period_us": 0, 00:14:54.913 "io_queue_requests": 0, 00:14:54.913 "delay_cmd_submit": true, 00:14:54.913 "transport_retry_count": 4, 00:14:54.913 "bdev_retry_count": 3, 
00:14:54.913 "transport_ack_timeout": 0, 00:14:54.913 "ctrlr_loss_timeout_sec": 0, 00:14:54.913 "reconnect_delay_sec": 0, 00:14:54.913 "fast_io_fail_timeout_sec": 0, 00:14:54.913 "disable_auto_failback": false, 00:14:54.913 "generate_uuids": false, 00:14:54.913 "transport_tos": 0, 00:14:54.913 "nvme_error_stat": false, 00:14:54.913 "rdma_srq_size": 0, 00:14:54.913 "io_path_stat": false, 00:14:54.913 "allow_accel_sequence": false, 00:14:54.913 "rdma_max_cq_size": 0, 00:14:54.913 "rdma_cm_event_timeout_ms": 0, 00:14:54.913 "dhchap_digests": [ 00:14:54.913 "sha256", 00:14:54.913 "sha384", 00:14:54.913 "sha512" 00:14:54.913 ], 00:14:54.913 "dhchap_dhgroups": [ 00:14:54.913 "null", 00:14:54.913 "ffdhe2048", 00:14:54.913 "ffdhe3072", 00:14:54.913 "ffdhe4096", 00:14:54.913 "ffdhe6144", 00:14:54.913 "ffdhe8192" 00:14:54.913 ] 00:14:54.913 } 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "method": "bdev_nvme_set_hotplug", 00:14:54.913 "params": { 00:14:54.913 "period_us": 100000, 00:14:54.913 "enable": false 00:14:54.913 } 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "method": "bdev_malloc_create", 00:14:54.913 "params": { 00:14:54.913 "name": "malloc0", 00:14:54.913 "num_blocks": 8192, 00:14:54.913 "block_size": 4096, 00:14:54.913 "physical_block_size": 4096, 00:14:54.913 "uuid": "e1a36a12-3fd2-4c9b-af3f-b086058865ad", 00:14:54.913 "optimal_io_boundary": 0 00:14:54.913 } 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "method": "bdev_wait_for_examine" 00:14:54.913 } 00:14:54.913 ] 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "subsystem": "nbd", 00:14:54.913 "config": [] 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "subsystem": "scheduler", 00:14:54.913 "config": [ 00:14:54.913 { 00:14:54.913 "method": "framework_set_scheduler", 00:14:54.913 "params": { 00:14:54.913 "name": "static" 00:14:54.913 } 00:14:54.913 } 00:14:54.913 ] 00:14:54.913 }, 00:14:54.913 { 00:14:54.913 "subsystem": "nvmf", 00:14:54.913 "config": [ 00:14:54.913 { 00:14:54.913 "method": "nvmf_set_config", 00:14:54.913 "params": { 00:14:54.914 "discovery_filter": "match_any", 00:14:54.914 "admin_cmd_passthru": { 00:14:54.914 "identify_ctrlr": false 00:14:54.914 } 00:14:54.914 } 00:14:54.914 }, 00:14:54.914 { 00:14:54.914 "method": "nvmf_set_max_subsystems", 00:14:54.914 "params": { 00:14:54.914 "max_subsystems": 1024 00:14:54.914 } 00:14:54.914 }, 00:14:54.914 { 00:14:54.914 "method": "nvmf_set_crdt", 00:14:54.914 "params": { 00:14:54.914 "crdt1": 0, 00:14:54.914 "crdt2": 0, 00:14:54.914 "crdt3": 0 00:14:54.914 } 00:14:54.914 }, 00:14:54.914 { 00:14:54.914 "method": "nvmf_create_transport", 00:14:54.914 "params": { 00:14:54.914 "trtype": "TCP", 00:14:54.914 "max_queue_depth": 128, 00:14:54.914 "max_io_qpairs_per_ctrlr": 127, 00:14:54.914 "in_capsule_data_size": 4096, 00:14:54.914 "max_io_size": 131072, 00:14:54.914 "io_unit_size": 131072, 00:14:54.914 "max_aq_depth": 128, 00:14:54.914 "num_shared_buffers": 511, 00:14:54.914 "buf_cache_size": 4294967295, 00:14:54.914 "dif_insert_or_strip": false, 00:14:54.914 "zcopy": false, 00:14:54.914 "c2h_success": false, 00:14:54.914 "sock_priority": 0, 00:14:54.914 "abort_timeout_sec": 1, 00:14:54.914 "ack_timeout": 0, 00:14:54.914 "data_wr_pool_size": 0 00:14:54.914 } 00:14:54.914 }, 00:14:54.914 { 00:14:54.914 "method": "nvmf_create_subsystem", 00:14:54.914 "params": { 00:14:54.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.914 "allow_any_host": false, 00:14:54.914 "serial_number": "00000000000000000000", 00:14:54.914 "model_number": "SPDK bdev Controller", 00:14:54.914 "max_namespaces": 32, 
00:14:54.914 "min_cntlid": 1, 00:14:54.914 "max_cntlid": 65519, 00:14:54.914 "ana_reporting": false 00:14:54.914 } 00:14:54.914 }, 00:14:54.914 { 00:14:54.914 "method": "nvmf_subsystem_add_host", 00:14:54.914 "params": { 00:14:54.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.914 "host": "nqn.2016-06.io.spdk:host1", 00:14:54.914 "psk": "key0" 00:14:54.914 } 00:14:54.914 }, 00:14:54.914 { 00:14:54.914 "method": "nvmf_subsystem_add_ns", 00:14:54.914 "params": { 00:14:54.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.914 "namespace": { 00:14:54.914 "nsid": 1, 00:14:54.914 "bdev_name": "malloc0", 00:14:54.914 "nguid": "E1A36A123FD24C9BAF3FB086058865AD", 00:14:54.914 "uuid": "e1a36a12-3fd2-4c9b-af3f-b086058865ad", 00:14:54.914 "no_auto_visible": false 00:14:54.914 } 00:14:54.914 } 00:14:54.914 }, 00:14:54.914 { 00:14:54.914 "method": "nvmf_subsystem_add_listener", 00:14:54.914 "params": { 00:14:54.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.914 "listen_address": { 00:14:54.914 "trtype": "TCP", 00:14:54.914 "adrfam": "IPv4", 00:14:54.914 "traddr": "10.0.0.2", 00:14:54.914 "trsvcid": "4420" 00:14:54.914 }, 00:14:54.914 "secure_channel": false, 00:14:54.914 "sock_impl": "ssl" 00:14:54.914 } 00:14:54.914 } 00:14:54.914 ] 00:14:54.914 } 00:14:54.914 ] 00:14:54.914 }' 00:14:54.914 19:05:22 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:55.174 19:05:22 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:14:55.174 "subsystems": [ 00:14:55.174 { 00:14:55.174 "subsystem": "keyring", 00:14:55.174 "config": [ 00:14:55.174 { 00:14:55.174 "method": "keyring_file_add_key", 00:14:55.174 "params": { 00:14:55.174 "name": "key0", 00:14:55.174 "path": "/tmp/tmp.PIZi588o2F" 00:14:55.174 } 00:14:55.174 } 00:14:55.174 ] 00:14:55.174 }, 00:14:55.174 { 00:14:55.174 "subsystem": "iobuf", 00:14:55.174 "config": [ 00:14:55.174 { 00:14:55.174 "method": "iobuf_set_options", 00:14:55.174 "params": { 00:14:55.174 "small_pool_count": 8192, 00:14:55.174 "large_pool_count": 1024, 00:14:55.174 "small_bufsize": 8192, 00:14:55.174 "large_bufsize": 135168 00:14:55.174 } 00:14:55.174 } 00:14:55.174 ] 00:14:55.174 }, 00:14:55.174 { 00:14:55.174 "subsystem": "sock", 00:14:55.174 "config": [ 00:14:55.174 { 00:14:55.174 "method": "sock_set_default_impl", 00:14:55.174 "params": { 00:14:55.174 "impl_name": "uring" 00:14:55.174 } 00:14:55.174 }, 00:14:55.174 { 00:14:55.174 "method": "sock_impl_set_options", 00:14:55.174 "params": { 00:14:55.174 "impl_name": "ssl", 00:14:55.174 "recv_buf_size": 4096, 00:14:55.174 "send_buf_size": 4096, 00:14:55.174 "enable_recv_pipe": true, 00:14:55.174 "enable_quickack": false, 00:14:55.174 "enable_placement_id": 0, 00:14:55.174 "enable_zerocopy_send_server": true, 00:14:55.174 "enable_zerocopy_send_client": false, 00:14:55.174 "zerocopy_threshold": 0, 00:14:55.174 "tls_version": 0, 00:14:55.174 "enable_ktls": false 00:14:55.174 } 00:14:55.174 }, 00:14:55.174 { 00:14:55.174 "method": "sock_impl_set_options", 00:14:55.174 "params": { 00:14:55.174 "impl_name": "posix", 00:14:55.174 "recv_buf_size": 2097152, 00:14:55.174 "send_buf_size": 2097152, 00:14:55.174 "enable_recv_pipe": true, 00:14:55.174 "enable_quickack": false, 00:14:55.174 "enable_placement_id": 0, 00:14:55.174 "enable_zerocopy_send_server": true, 00:14:55.174 "enable_zerocopy_send_client": false, 00:14:55.174 "zerocopy_threshold": 0, 00:14:55.174 "tls_version": 0, 00:14:55.174 "enable_ktls": false 00:14:55.174 } 00:14:55.174 }, 00:14:55.174 { 
00:14:55.174 "method": "sock_impl_set_options", 00:14:55.174 "params": { 00:14:55.174 "impl_name": "uring", 00:14:55.174 "recv_buf_size": 2097152, 00:14:55.174 "send_buf_size": 2097152, 00:14:55.174 "enable_recv_pipe": true, 00:14:55.174 "enable_quickack": false, 00:14:55.174 "enable_placement_id": 0, 00:14:55.174 "enable_zerocopy_send_server": false, 00:14:55.174 "enable_zerocopy_send_client": false, 00:14:55.174 "zerocopy_threshold": 0, 00:14:55.174 "tls_version": 0, 00:14:55.174 "enable_ktls": false 00:14:55.174 } 00:14:55.174 } 00:14:55.174 ] 00:14:55.174 }, 00:14:55.174 { 00:14:55.174 "subsystem": "vmd", 00:14:55.174 "config": [] 00:14:55.174 }, 00:14:55.174 { 00:14:55.174 "subsystem": "accel", 00:14:55.174 "config": [ 00:14:55.174 { 00:14:55.174 "method": "accel_set_options", 00:14:55.174 "params": { 00:14:55.174 "small_cache_size": 128, 00:14:55.174 "large_cache_size": 16, 00:14:55.174 "task_count": 2048, 00:14:55.174 "sequence_count": 2048, 00:14:55.174 "buf_count": 2048 00:14:55.174 } 00:14:55.174 } 00:14:55.174 ] 00:14:55.174 }, 00:14:55.174 { 00:14:55.174 "subsystem": "bdev", 00:14:55.174 "config": [ 00:14:55.174 { 00:14:55.174 "method": "bdev_set_options", 00:14:55.174 "params": { 00:14:55.174 "bdev_io_pool_size": 65535, 00:14:55.174 "bdev_io_cache_size": 256, 00:14:55.175 "bdev_auto_examine": true, 00:14:55.175 "iobuf_small_cache_size": 128, 00:14:55.175 "iobuf_large_cache_size": 16 00:14:55.175 } 00:14:55.175 }, 00:14:55.175 { 00:14:55.175 "method": "bdev_raid_set_options", 00:14:55.175 "params": { 00:14:55.175 "process_window_size_kb": 1024 00:14:55.175 } 00:14:55.175 }, 00:14:55.175 { 00:14:55.175 "method": "bdev_iscsi_set_options", 00:14:55.175 "params": { 00:14:55.175 "timeout_sec": 30 00:14:55.175 } 00:14:55.175 }, 00:14:55.175 { 00:14:55.175 "method": "bdev_nvme_set_options", 00:14:55.175 "params": { 00:14:55.175 "action_on_timeout": "none", 00:14:55.175 "timeout_us": 0, 00:14:55.175 "timeout_admin_us": 0, 00:14:55.175 "keep_alive_timeout_ms": 10000, 00:14:55.175 "arbitration_burst": 0, 00:14:55.175 "low_priority_weight": 0, 00:14:55.175 "medium_priority_weight": 0, 00:14:55.175 "high_priority_weight": 0, 00:14:55.175 "nvme_adminq_poll_period_us": 10000, 00:14:55.175 "nvme_ioq_poll_period_us": 0, 00:14:55.175 "io_queue_requests": 512, 00:14:55.175 "delay_cmd_submit": true, 00:14:55.175 "transport_retry_count": 4, 00:14:55.175 "bdev_retry_count": 3, 00:14:55.175 "transport_ack_timeout": 0, 00:14:55.175 "ctrlr_loss_timeout_sec": 0, 00:14:55.175 "reconnect_delay_sec": 0, 00:14:55.175 "fast_io_fail_timeout_sec": 0, 00:14:55.175 "disable_auto_failback": false, 00:14:55.175 "generate_uuids": false, 00:14:55.175 "transport_tos": 0, 00:14:55.175 "nvme_error_stat": false, 00:14:55.175 "rdma_srq_size": 0, 00:14:55.175 "io_path_stat": false, 00:14:55.175 "allow_accel_sequence": false, 00:14:55.175 "rdma_max_cq_size": 0, 00:14:55.175 "rdma_cm_event_timeout_ms": 0, 00:14:55.175 "dhchap_digests": [ 00:14:55.175 "sha256", 00:14:55.175 "sha384", 00:14:55.175 "sha512" 00:14:55.175 ], 00:14:55.175 "dhchap_dhgroups": [ 00:14:55.175 "null", 00:14:55.175 "ffdhe2048", 00:14:55.175 "ffdhe3072", 00:14:55.175 "ffdhe4096", 00:14:55.175 "ffdhe6144", 00:14:55.175 "ffdhe8192" 00:14:55.175 ] 00:14:55.175 } 00:14:55.175 }, 00:14:55.175 { 00:14:55.175 "method": "bdev_nvme_attach_controller", 00:14:55.175 "params": { 00:14:55.175 "name": "nvme0", 00:14:55.175 "trtype": "TCP", 00:14:55.175 "adrfam": "IPv4", 00:14:55.175 "traddr": "10.0.0.2", 00:14:55.175 "trsvcid": "4420", 00:14:55.175 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:14:55.175 "prchk_reftag": false, 00:14:55.175 "prchk_guard": false, 00:14:55.175 "ctrlr_loss_timeout_sec": 0, 00:14:55.175 "reconnect_delay_sec": 0, 00:14:55.175 "fast_io_fail_timeout_sec": 0, 00:14:55.175 "psk": "key0", 00:14:55.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.175 "hdgst": false, 00:14:55.175 "ddgst": false 00:14:55.175 } 00:14:55.175 }, 00:14:55.175 { 00:14:55.175 "method": "bdev_nvme_set_hotplug", 00:14:55.175 "params": { 00:14:55.175 "period_us": 100000, 00:14:55.175 "enable": false 00:14:55.175 } 00:14:55.175 }, 00:14:55.175 { 00:14:55.175 "method": "bdev_enable_histogram", 00:14:55.175 "params": { 00:14:55.175 "name": "nvme0n1", 00:14:55.175 "enable": true 00:14:55.175 } 00:14:55.175 }, 00:14:55.175 { 00:14:55.175 "method": "bdev_wait_for_examine" 00:14:55.175 } 00:14:55.175 ] 00:14:55.175 }, 00:14:55.175 { 00:14:55.175 "subsystem": "nbd", 00:14:55.175 "config": [] 00:14:55.175 } 00:14:55.175 ] 00:14:55.175 }' 00:14:55.175 19:05:22 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 74119 00:14:55.175 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74119 ']' 00:14:55.175 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74119 00:14:55.175 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:55.175 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.175 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74119 00:14:55.175 killing process with pid 74119 00:14:55.175 Received shutdown signal, test time was about 1.000000 seconds 00:14:55.175 00:14:55.175 Latency(us) 00:14:55.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.175 =================================================================================================================== 00:14:55.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.175 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:55.175 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:55.175 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74119' 00:14:55.175 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74119 00:14:55.175 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74119 00:14:55.434 19:05:22 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 74087 00:14:55.434 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74087 ']' 00:14:55.434 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74087 00:14:55.434 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:55.434 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.434 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74087 00:14:55.434 killing process with pid 74087 00:14:55.434 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:55.434 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:55.434 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74087' 00:14:55.434 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74087 00:14:55.434 19:05:22 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@972 -- # wait 74087 00:14:55.752 19:05:22 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:14:55.752 19:05:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.752 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.752 19:05:22 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:14:55.752 "subsystems": [ 00:14:55.752 { 00:14:55.752 "subsystem": "keyring", 00:14:55.752 "config": [ 00:14:55.752 { 00:14:55.752 "method": "keyring_file_add_key", 00:14:55.752 "params": { 00:14:55.752 "name": "key0", 00:14:55.752 "path": "/tmp/tmp.PIZi588o2F" 00:14:55.752 } 00:14:55.752 } 00:14:55.752 ] 00:14:55.752 }, 00:14:55.752 { 00:14:55.752 "subsystem": "iobuf", 00:14:55.752 "config": [ 00:14:55.752 { 00:14:55.752 "method": "iobuf_set_options", 00:14:55.752 "params": { 00:14:55.752 "small_pool_count": 8192, 00:14:55.752 "large_pool_count": 1024, 00:14:55.752 "small_bufsize": 8192, 00:14:55.752 "large_bufsize": 135168 00:14:55.752 } 00:14:55.752 } 00:14:55.752 ] 00:14:55.752 }, 00:14:55.752 { 00:14:55.752 "subsystem": "sock", 00:14:55.752 "config": [ 00:14:55.752 { 00:14:55.752 "method": "sock_set_default_impl", 00:14:55.752 "params": { 00:14:55.752 "impl_name": "uring" 00:14:55.752 } 00:14:55.752 }, 00:14:55.752 { 00:14:55.752 "method": "sock_impl_set_options", 00:14:55.752 "params": { 00:14:55.752 "impl_name": "ssl", 00:14:55.752 "recv_buf_size": 4096, 00:14:55.752 "send_buf_size": 4096, 00:14:55.752 "enable_recv_pipe": true, 00:14:55.752 "enable_quickack": false, 00:14:55.752 "enable_placement_id": 0, 00:14:55.752 "enable_zerocopy_send_server": true, 00:14:55.752 "enable_zerocopy_send_client": false, 00:14:55.752 "zerocopy_threshold": 0, 00:14:55.752 "tls_version": 0, 00:14:55.753 "enable_ktls": false 00:14:55.753 } 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "method": "sock_impl_set_options", 00:14:55.753 "params": { 00:14:55.753 "impl_name": "posix", 00:14:55.753 "recv_buf_size": 2097152, 00:14:55.753 "send_buf_size": 2097152, 00:14:55.753 "enable_recv_pipe": true, 00:14:55.753 "enable_quickack": false, 00:14:55.753 "enable_placement_id": 0, 00:14:55.753 "enable_zerocopy_send_server": true, 00:14:55.753 "enable_zerocopy_send_client": false, 00:14:55.753 "zerocopy_threshold": 0, 00:14:55.753 "tls_version": 0, 00:14:55.753 "enable_ktls": false 00:14:55.753 } 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "method": "sock_impl_set_options", 00:14:55.753 "params": { 00:14:55.753 "impl_name": "uring", 00:14:55.753 "recv_buf_size": 2097152, 00:14:55.753 "send_buf_size": 2097152, 00:14:55.753 "enable_recv_pipe": true, 00:14:55.753 "enable_quickack": false, 00:14:55.753 "enable_placement_id": 0, 00:14:55.753 "enable_zerocopy_send_server": false, 00:14:55.753 "enable_zerocopy_send_client": false, 00:14:55.753 "zerocopy_threshold": 0, 00:14:55.753 "tls_version": 0, 00:14:55.753 "enable_ktls": false 00:14:55.753 } 00:14:55.753 } 00:14:55.753 ] 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "subsystem": "vmd", 00:14:55.753 "config": [] 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "subsystem": "accel", 00:14:55.753 "config": [ 00:14:55.753 { 00:14:55.753 "method": "accel_set_options", 00:14:55.753 "params": { 00:14:55.753 "small_cache_size": 128, 00:14:55.753 "large_cache_size": 16, 00:14:55.753 "task_count": 2048, 00:14:55.753 "sequence_count": 2048, 00:14:55.753 "buf_count": 2048 00:14:55.753 } 00:14:55.753 } 00:14:55.753 ] 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "subsystem": "bdev", 00:14:55.753 
"config": [ 00:14:55.753 { 00:14:55.753 "method": "bdev_set_options", 00:14:55.753 "params": { 00:14:55.753 "bdev_io_pool_size": 65535, 00:14:55.753 "bdev_io_cache_size": 256, 00:14:55.753 "bdev_auto_examine": true, 00:14:55.753 "iobuf_small_cache_size": 128, 00:14:55.753 "iobuf_large_cache_size": 16 00:14:55.753 } 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "method": "bdev_raid_set_options", 00:14:55.753 "params": { 00:14:55.753 "process_window_size_kb": 1024 00:14:55.753 } 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "method": "bdev_iscsi_set_options", 00:14:55.753 "params": { 00:14:55.753 "timeout_sec": 30 00:14:55.753 } 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "method": "bdev_nvme_set_options", 00:14:55.753 "params": { 00:14:55.753 "action_on_timeout": "none", 00:14:55.753 "timeout_us": 0, 00:14:55.753 "timeout_admin_us": 0, 00:14:55.753 "keep_alive_timeout_ms": 10000, 00:14:55.753 "arbitration_burst": 0, 00:14:55.753 "low_priority_weight": 0, 00:14:55.753 "medium_priority_weight": 0, 00:14:55.753 "high_priority_weight": 0, 00:14:55.753 "nvme_adminq_poll_period_us": 10000, 00:14:55.753 "nvme_ioq_poll_period_us": 0, 00:14:55.753 "io_queue_requests": 0, 00:14:55.753 "delay_cmd_submit": true, 00:14:55.753 "transport_retry_count": 4, 00:14:55.753 "bdev_retry_count": 3, 00:14:55.753 "transport_ack_timeout": 0, 00:14:55.753 "ctrlr_loss_timeout_sec": 0, 00:14:55.753 "reconnect_delay_sec": 0, 00:14:55.753 "fast_io_fail_timeout_sec": 0, 00:14:55.753 "disable_auto_failback": false, 00:14:55.753 "generate_uuids": false, 00:14:55.753 "transport_tos": 0, 00:14:55.753 "nvme_error_stat": false, 00:14:55.753 "rdma_srq_size": 0, 00:14:55.753 "io_path_stat": false, 00:14:55.753 "allow_accel_sequence": false, 00:14:55.753 "rdma_max_cq_size": 0, 00:14:55.753 "rdma_cm_event_timeout_ms": 0, 00:14:55.753 "dhchap_digests": [ 00:14:55.753 "sha256", 00:14:55.753 "sha384", 00:14:55.753 "sha512" 00:14:55.753 ], 00:14:55.753 "dhchap_dhgroups": [ 00:14:55.753 "null", 00:14:55.753 "ffdhe2048", 00:14:55.753 "ffdhe3072", 00:14:55.753 "ffdhe4096", 00:14:55.753 "ffdhe6144", 00:14:55.753 "ffdhe8192" 00:14:55.753 ] 00:14:55.753 } 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "method": "bdev_nvme_set_hotplug", 00:14:55.753 "params": { 00:14:55.753 "period_us": 100000, 00:14:55.753 "enable": false 00:14:55.753 } 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "method": "bdev_malloc_create", 00:14:55.753 "params": { 00:14:55.753 "name": "malloc0", 00:14:55.753 "num_blocks": 8192, 00:14:55.753 "block_size": 4096, 00:14:55.753 "physical_block_size": 4096, 00:14:55.753 "uuid": "e1a36a12-3fd2-4c9b-af3f-b086058865ad", 00:14:55.753 "optimal_io_boundary": 0 00:14:55.753 } 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "method": "bdev_wait_for_examine" 00:14:55.753 } 00:14:55.753 ] 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "subsystem": "nbd", 00:14:55.753 "config": [] 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "subsystem": "scheduler", 00:14:55.753 "config": [ 00:14:55.753 { 00:14:55.753 "method": "framework_set_scheduler", 00:14:55.753 "params": { 00:14:55.753 "name": "static" 00:14:55.753 } 00:14:55.753 } 00:14:55.753 ] 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "subsystem": "nvmf", 00:14:55.753 "config": [ 00:14:55.753 { 00:14:55.753 "method": "nvmf_set_config", 00:14:55.753 "params": { 00:14:55.753 "discovery_filter": "match_any", 00:14:55.753 "admin_cmd_passthru": { 00:14:55.753 "identify_ctrlr": false 00:14:55.753 } 00:14:55.753 } 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "method": "nvmf_set_max_subsystems", 00:14:55.753 
"params": { 00:14:55.753 "max_subsystems": 1024 00:14:55.753 } 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "method": "nvmf_set_crdt", 00:14:55.753 "params": { 00:14:55.753 "crdt1": 0, 00:14:55.753 "crdt2": 0, 00:14:55.753 "crdt3": 0 00:14:55.753 } 00:14:55.753 }, 00:14:55.753 { 00:14:55.753 "method": "nvmf_create_transport", 00:14:55.753 "params": { 00:14:55.753 "trtype": "TCP", 00:14:55.753 "max_queue_depth": 128, 00:14:55.753 "max_io_qpairs_per_ctrlr": 127, 00:14:55.753 "in_capsule_data_size": 4096, 00:14:55.753 "max_io_size": 131072, 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.753 00:14:55.754 "io_unit_size": 131072, 00:14:55.754 "max_aq_depth": 128, 00:14:55.754 "num_shared_buffers": 511, 00:14:55.754 "buf_cache_size": 4294967295, 00:14:55.754 "dif_insert_or_strip": false, 00:14:55.754 "zcopy": false, 00:14:55.754 "c2h_success": false, 00:14:55.754 "sock_priority": 0, 00:14:55.754 "abort_timeout_sec": 1, 00:14:55.754 "ack_timeout": 0, 00:14:55.754 "data_wr_pool_size": 0 00:14:55.754 } 00:14:55.754 }, 00:14:55.754 { 00:14:55.754 "method": "nvmf_create_subsystem", 00:14:55.754 "params": { 00:14:55.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.754 "allow_any_host": false, 00:14:55.754 "serial_number": "00000000000000000000", 00:14:55.754 "model_number": "SPDK bdev Controller", 00:14:55.754 "max_namespaces": 32, 00:14:55.754 "min_cntlid": 1, 00:14:55.754 "max_cntlid": 65519, 00:14:55.754 "ana_reporting": false 00:14:55.754 } 00:14:55.754 }, 00:14:55.754 { 00:14:55.754 "method": "nvmf_subsystem_add_host", 00:14:55.754 "params": { 00:14:55.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.754 "host": "nqn.2016-06.io.spdk:host1", 00:14:55.754 "psk": "key0" 00:14:55.754 } 00:14:55.754 }, 00:14:55.754 { 00:14:55.754 "method": "nvmf_subsystem_add_ns", 00:14:55.754 "params": { 00:14:55.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.754 "namespace": { 00:14:55.754 "nsid": 1, 00:14:55.754 "bdev_name": "malloc0", 00:14:55.754 "nguid": "E1A36A123FD24C9BAF3FB086058865AD", 00:14:55.754 "uuid": "e1a36a12-3fd2-4c9b-af3f-b086058865ad", 00:14:55.754 "no_auto_visible": false 00:14:55.754 } 00:14:55.754 } 00:14:55.754 }, 00:14:55.754 { 00:14:55.754 "method": "nvmf_subsystem_add_listener", 00:14:55.754 "params": { 00:14:55.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.754 "listen_address": { 00:14:55.754 "trtype": "TCP", 00:14:55.754 "adrfam": "IPv4", 00:14:55.754 "traddr": "10.0.0.2", 00:14:55.754 "trsvcid": "4420" 00:14:55.754 }, 00:14:55.754 "secure_channel": false, 00:14:55.754 "sock_impl": "ssl" 00:14:55.754 } 00:14:55.754 } 00:14:55.754 ] 00:14:55.754 } 00:14:55.754 ] 00:14:55.754 }' 00:14:55.754 19:05:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74185 00:14:55.754 19:05:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:55.754 19:05:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74185 00:14:55.754 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74185 ']' 00:14:55.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:55.754 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.754 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.754 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.754 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.754 19:05:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.754 [2024-07-15 19:05:22.941669] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:55.754 [2024-07-15 19:05:22.941763] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.023 [2024-07-15 19:05:23.082169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.023 [2024-07-15 19:05:23.196536] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.023 [2024-07-15 19:05:23.196593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.023 [2024-07-15 19:05:23.196621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.024 [2024-07-15 19:05:23.196629] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.024 [2024-07-15 19:05:23.196637] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.024 [2024-07-15 19:05:23.196728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.282 [2024-07-15 19:05:23.364317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:56.282 [2024-07-15 19:05:23.446267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.282 [2024-07-15 19:05:23.478208] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:56.282 [2024-07-15 19:05:23.478452] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.850 19:05:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.850 19:05:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:56.850 19:05:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.850 19:05:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:56.850 19:05:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
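The waitforlisten steps above block until the application's RPC socket answers. A rough equivalent is the polling loop sketched below; it is not the autotest_common.sh helper itself (which also tracks the spawned pid), and the rpc_get_methods call with a short --timeout is assumed standard rpc.py usage rather than something taken from this trace.

# Sketch: wait for an SPDK app to bring up its RPC socket before issuing real RPCs.
sock=/var/tmp/bdevperf.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do            # 100 mirrors the helper's max_retries above
    if "$rpc" -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
        echo "RPC socket $sock is up"      # socket answered; safe to continue
        break
    fi
    sleep 0.5
done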
00:14:56.850 19:05:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.850 19:05:24 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=74217 00:14:56.850 19:05:24 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 74217 /var/tmp/bdevperf.sock 00:14:56.850 19:05:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74217 ']' 00:14:56.850 19:05:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:56.850 19:05:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.850 19:05:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:56.850 19:05:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.850 19:05:24 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:56.850 19:05:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.850 19:05:24 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:14:56.850 "subsystems": [ 00:14:56.850 { 00:14:56.850 "subsystem": "keyring", 00:14:56.850 "config": [ 00:14:56.850 { 00:14:56.850 "method": "keyring_file_add_key", 00:14:56.850 "params": { 00:14:56.850 "name": "key0", 00:14:56.850 "path": "/tmp/tmp.PIZi588o2F" 00:14:56.850 } 00:14:56.850 } 00:14:56.850 ] 00:14:56.850 }, 00:14:56.850 { 00:14:56.851 "subsystem": "iobuf", 00:14:56.851 "config": [ 00:14:56.851 { 00:14:56.851 "method": "iobuf_set_options", 00:14:56.851 "params": { 00:14:56.851 "small_pool_count": 8192, 00:14:56.851 "large_pool_count": 1024, 00:14:56.851 "small_bufsize": 8192, 00:14:56.851 "large_bufsize": 135168 00:14:56.851 } 00:14:56.851 } 00:14:56.851 ] 00:14:56.851 }, 00:14:56.851 { 00:14:56.851 "subsystem": "sock", 00:14:56.851 "config": [ 00:14:56.851 { 00:14:56.851 "method": "sock_set_default_impl", 00:14:56.851 "params": { 00:14:56.851 "impl_name": "uring" 00:14:56.851 } 00:14:56.851 }, 00:14:56.851 { 00:14:56.851 "method": "sock_impl_set_options", 00:14:56.851 "params": { 00:14:56.851 "impl_name": "ssl", 00:14:56.851 "recv_buf_size": 4096, 00:14:56.851 "send_buf_size": 4096, 00:14:56.851 "enable_recv_pipe": true, 00:14:56.851 "enable_quickack": false, 00:14:56.851 "enable_placement_id": 0, 00:14:56.851 "enable_zerocopy_send_server": true, 00:14:56.851 "enable_zerocopy_send_client": false, 00:14:56.851 "zerocopy_threshold": 0, 00:14:56.851 "tls_version": 0, 00:14:56.851 "enable_ktls": false 00:14:56.851 } 00:14:56.851 }, 00:14:56.851 { 00:14:56.851 "method": "sock_impl_set_options", 00:14:56.851 "params": { 00:14:56.851 "impl_name": "posix", 00:14:56.851 "recv_buf_size": 2097152, 00:14:56.851 "send_buf_size": 2097152, 00:14:56.851 "enable_recv_pipe": true, 00:14:56.851 "enable_quickack": false, 00:14:56.851 "enable_placement_id": 0, 00:14:56.851 "enable_zerocopy_send_server": true, 00:14:56.851 "enable_zerocopy_send_client": false, 00:14:56.851 "zerocopy_threshold": 0, 00:14:56.851 "tls_version": 0, 00:14:56.851 "enable_ktls": false 00:14:56.851 } 00:14:56.851 }, 00:14:56.851 { 00:14:56.851 "method": "sock_impl_set_options", 00:14:56.851 "params": { 00:14:56.851 "impl_name": "uring", 00:14:56.851 "recv_buf_size": 2097152, 00:14:56.851 "send_buf_size": 2097152, 00:14:56.851 "enable_recv_pipe": true, 00:14:56.851 
"enable_quickack": false, 00:14:56.851 "enable_placement_id": 0, 00:14:56.851 "enable_zerocopy_send_server": false, 00:14:56.851 "enable_zerocopy_send_client": false, 00:14:56.851 "zerocopy_threshold": 0, 00:14:56.851 "tls_version": 0, 00:14:56.851 "enable_ktls": false 00:14:56.851 } 00:14:56.851 } 00:14:56.851 ] 00:14:56.851 }, 00:14:56.851 { 00:14:56.851 "subsystem": "vmd", 00:14:56.851 "config": [] 00:14:56.851 }, 00:14:56.851 { 00:14:56.851 "subsystem": "accel", 00:14:56.851 "config": [ 00:14:56.851 { 00:14:56.851 "method": "accel_set_options", 00:14:56.851 "params": { 00:14:56.851 "small_cache_size": 128, 00:14:56.851 "large_cache_size": 16, 00:14:56.851 "task_count": 2048, 00:14:56.851 "sequence_count": 2048, 00:14:56.851 "buf_count": 2048 00:14:56.851 } 00:14:56.851 } 00:14:56.851 ] 00:14:56.851 }, 00:14:56.851 { 00:14:56.851 "subsystem": "bdev", 00:14:56.851 "config": [ 00:14:56.851 { 00:14:56.851 "method": "bdev_set_options", 00:14:56.851 "params": { 00:14:56.851 "bdev_io_pool_size": 65535, 00:14:56.851 "bdev_io_cache_size": 256, 00:14:56.851 "bdev_auto_examine": true, 00:14:56.851 "iobuf_small_cache_size": 128, 00:14:56.851 "iobuf_large_cache_size": 16 00:14:56.851 } 00:14:56.851 }, 00:14:56.851 { 00:14:56.851 "method": "bdev_raid_set_options", 00:14:56.851 "params": { 00:14:56.851 "process_window_size_kb": 1024 00:14:56.851 } 00:14:56.851 }, 00:14:56.851 { 00:14:56.851 "method": "bdev_iscsi_set_options", 00:14:56.851 "params": { 00:14:56.851 "timeout_sec": 30 00:14:56.851 } 00:14:56.851 }, 00:14:56.851 { 00:14:56.851 "method": "bdev_nvme_set_options", 00:14:56.851 "params": { 00:14:56.851 "action_on_timeout": "none", 00:14:56.851 "timeout_us": 0, 00:14:56.851 "timeout_admin_us": 0, 00:14:56.851 "keep_alive_timeout_ms": 10000, 00:14:56.851 "arbitration_burst": 0, 00:14:56.851 "low_priority_weight": 0, 00:14:56.851 "medium_priority_weight": 0, 00:14:56.851 "high_priority_weight": 0, 00:14:56.851 "nvme_adminq_poll_period_us": 10000, 00:14:56.851 "nvme_ioq_poll_period_us": 0, 00:14:56.851 "io_queue_requests": 512, 00:14:56.851 "delay_cmd_submit": true, 00:14:56.851 "transport_retry_count": 4, 00:14:56.851 "bdev_retry_count": 3, 00:14:56.851 "transport_ack_timeout": 0, 00:14:56.851 "ctrlr_loss_timeout_sec": 0, 00:14:56.851 "reconnect_delay_sec": 0, 00:14:56.851 "fast_io_fail_timeout_sec": 0, 00:14:56.851 "disable_auto_failback": false, 00:14:56.851 "generate_uuids": false, 00:14:56.851 "transport_tos": 0, 00:14:56.851 "nvme_error_stat": false, 00:14:56.851 "rdma_srq_size": 0, 00:14:56.851 "io_path_stat": false, 00:14:56.851 "allow_accel_sequence": false, 00:14:56.851 "rdma_max_cq_size": 0, 00:14:56.851 "rdma_cm_event_timeout_ms": 0, 00:14:56.851 "dhchap_digests": [ 00:14:56.851 "sha256", 00:14:56.851 "sha384", 00:14:56.851 "sha512" 00:14:56.851 ], 00:14:56.851 "dhchap_dhgroups": [ 00:14:56.851 "null", 00:14:56.851 "ffdhe2048", 00:14:56.851 "ffdhe3072", 00:14:56.851 "ffdhe4096", 00:14:56.851 "ffdhe6144", 00:14:56.851 "ffdhe8192" 00:14:56.851 ] 00:14:56.851 } 00:14:56.851 }, 00:14:56.851 { 00:14:56.851 "method": "bdev_nvme_attach_controller", 00:14:56.851 "params": { 00:14:56.851 "name": "nvme0", 00:14:56.851 "trtype": "TCP", 00:14:56.851 "adrfam": "IPv4", 00:14:56.851 "traddr": "10.0.0.2", 00:14:56.851 "trsvcid": "4420", 00:14:56.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.851 "prchk_reftag": false, 00:14:56.852 "prchk_guard": false, 00:14:56.852 "ctrlr_loss_timeout_sec": 0, 00:14:56.852 "reconnect_delay_sec": 0, 00:14:56.852 "fast_io_fail_timeout_sec": 0, 00:14:56.852 "psk": 
"key0", 00:14:56.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.852 "hdgst": false, 00:14:56.852 "ddgst": false 00:14:56.852 } 00:14:56.852 }, 00:14:56.852 { 00:14:56.852 "method": "bdev_nvme_set_hotplug", 00:14:56.852 "params": { 00:14:56.852 "period_us": 100000, 00:14:56.852 "enable": false 00:14:56.852 } 00:14:56.852 }, 00:14:56.852 { 00:14:56.852 "method": "bdev_enable_histogram", 00:14:56.852 "params": { 00:14:56.852 "name": "nvme0n1", 00:14:56.852 "enable": true 00:14:56.852 } 00:14:56.852 }, 00:14:56.852 { 00:14:56.852 "method": "bdev_wait_for_examine" 00:14:56.852 } 00:14:56.852 ] 00:14:56.852 }, 00:14:56.852 { 00:14:56.852 "subsystem": "nbd", 00:14:56.852 "config": [] 00:14:56.852 } 00:14:56.852 ] 00:14:56.852 }' 00:14:56.852 [2024-07-15 19:05:24.055557] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:14:56.852 [2024-07-15 19:05:24.055645] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74217 ] 00:14:57.110 [2024-07-15 19:05:24.183713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.110 [2024-07-15 19:05:24.294804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.368 [2024-07-15 19:05:24.434228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:57.368 [2024-07-15 19:05:24.483577] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:57.935 19:05:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.935 19:05:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:57.935 19:05:25 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:14:57.935 19:05:25 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:58.194 19:05:25 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.194 19:05:25 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:58.452 Running I/O for 1 seconds... 
00:14:59.388 00:14:59.388 Latency(us) 00:14:59.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.388 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:59.388 Verification LBA range: start 0x0 length 0x2000 00:14:59.388 nvme0n1 : 1.02 3824.55 14.94 0.00 0.00 33054.42 5719.51 24427.05 00:14:59.388 =================================================================================================================== 00:14:59.388 Total : 3824.55 14.94 0.00 0.00 33054.42 5719.51 24427.05 00:14:59.388 0 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:59.388 nvmf_trace.0 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74217 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74217 ']' 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74217 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74217 00:14:59.388 killing process with pid 74217 00:14:59.388 Received shutdown signal, test time was about 1.000000 seconds 00:14:59.388 00:14:59.388 Latency(us) 00:14:59.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.388 =================================================================================================================== 00:14:59.388 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74217' 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74217 00:14:59.388 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74217 00:14:59.647 19:05:26 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:59.647 19:05:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.647 19:05:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:59.906 19:05:26 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.907 19:05:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:59.907 19:05:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.907 19:05:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.907 rmmod nvme_tcp 00:14:59.907 rmmod nvme_fabrics 00:14:59.907 rmmod nvme_keyring 00:14:59.907 19:05:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.907 19:05:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:59.907 19:05:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:59.907 19:05:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74185 ']' 00:14:59.907 19:05:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74185 00:14:59.907 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74185 ']' 00:14:59.907 19:05:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74185 00:14:59.907 19:05:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:59.907 19:05:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.907 19:05:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74185 00:14:59.907 19:05:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:59.907 19:05:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:59.907 killing process with pid 74185 00:14:59.907 19:05:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74185' 00:14:59.907 19:05:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74185 00:14:59.907 19:05:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74185 00:15:00.166 19:05:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:00.166 19:05:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:00.166 19:05:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:00.166 19:05:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:00.166 19:05:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:00.166 19:05:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.166 19:05:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.166 19:05:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.166 19:05:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:00.166 19:05:27 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.nh6uNRQpfo /tmp/tmp.dLiXOV8fjv /tmp/tmp.PIZi588o2F 00:15:00.166 ************************************ 00:15:00.166 END TEST nvmf_tls 00:15:00.166 ************************************ 00:15:00.166 00:15:00.166 real 1m26.823s 00:15:00.166 user 2m17.857s 00:15:00.166 sys 0m28.076s 00:15:00.166 19:05:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:00.166 19:05:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.166 19:05:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:00.166 19:05:27 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:00.166 19:05:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:00.166 19:05:27 nvmf_tcp 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.166 19:05:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.166 ************************************ 00:15:00.166 START TEST nvmf_fips 00:15:00.166 ************************************ 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:00.166 * Looking for test storage... 00:15:00.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.166 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@333 -- # read -ra ver1 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.425 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:00.426 Error setting digest 00:15:00.426 00E2993AF57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:00.426 00E2993AF57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:00.426 Cannot find device "nvmf_tgt_br" 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:00.426 Cannot find device "nvmf_tgt_br2" 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:00.426 Cannot find device "nvmf_tgt_br" 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:00.426 Cannot find device "nvmf_tgt_br2" 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:15:00.426 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:00.683 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:00.683 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:00.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.683 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:00.683 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:00.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.683 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:00.683 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:00.683 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:00.683 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:00.683 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:00.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:15:00.684 00:15:00.684 --- 10.0.0.2 ping statistics --- 00:15:00.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.684 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:00.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:00.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:00.684 00:15:00.684 --- 10.0.0.3 ping statistics --- 00:15:00.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.684 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:00.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:15:00.684 00:15:00.684 --- 10.0.0.1 ping statistics --- 00:15:00.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.684 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:00.684 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74480 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74480 00:15:00.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74480 ']' 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.942 19:05:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:00.942 [2024-07-15 19:05:28.082419] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:15:00.942 [2024-07-15 19:05:28.082543] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.942 [2024-07-15 19:05:28.224249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.201 [2024-07-15 19:05:28.319514] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.202 [2024-07-15 19:05:28.319586] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.202 [2024-07-15 19:05:28.319601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.202 [2024-07-15 19:05:28.319612] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.202 [2024-07-15 19:05:28.319621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:01.202 [2024-07-15 19:05:28.319652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.202 [2024-07-15 19:05:28.378133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:01.767 19:05:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.768 19:05:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:01.768 19:05:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:01.768 19:05:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:01.768 19:05:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:01.768 19:05:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.768 19:05:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:01.768 19:05:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:01.768 19:05:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:01.768 19:05:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:01.768 19:05:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:01.768 19:05:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:01.768 19:05:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:01.768 19:05:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.026 [2024-07-15 19:05:29.249840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.026 [2024-07-15 19:05:29.265784] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:02.026 [2024-07-15 19:05:29.266021] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.026 [2024-07-15 19:05:29.296952] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:02.026 malloc0 00:15:02.285 19:05:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
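(The fips.sh setup traced here amounts to writing the TLS PSK interchange key to a file with 0600 permissions and handing that file to the initiator. A condensed sketch follows, with the key string and paths exactly as traced; the target-side rpc.py arguments issued at fips.sh@24 are not visible in this excerpt and are therefore left out.)

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"      # PSK files must not be readable by other users

  # bdevperf (the initiator) attaches to the TLS-enabled listener with this key,
  # as traced a few lines further down:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"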
00:15:02.285 19:05:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74514 00:15:02.285 19:05:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:02.285 19:05:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74514 /var/tmp/bdevperf.sock 00:15:02.285 19:05:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74514 ']' 00:15:02.285 19:05:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.285 19:05:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.285 19:05:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:02.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:02.285 19:05:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.285 19:05:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:02.285 [2024-07-15 19:05:29.409287] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:15:02.285 [2024-07-15 19:05:29.409638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74514 ] 00:15:02.285 [2024-07-15 19:05:29.551943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.544 [2024-07-15 19:05:29.687795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.544 [2024-07-15 19:05:29.746420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:03.184 19:05:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.184 19:05:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:03.184 19:05:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:03.444 [2024-07-15 19:05:30.563274] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:03.444 [2024-07-15 19:05:30.563404] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:03.444 TLSTESTn1 00:15:03.444 19:05:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:03.701 Running I/O for 10 seconds... 
00:15:13.675 00:15:13.675 Latency(us) 00:15:13.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.675 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:13.675 Verification LBA range: start 0x0 length 0x2000 00:15:13.675 TLSTESTn1 : 10.02 4100.17 16.02 0.00 0.00 31157.18 6851.49 26810.18 00:15:13.675 =================================================================================================================== 00:15:13.675 Total : 4100.17 16.02 0.00 0.00 31157.18 6851.49 26810.18 00:15:13.675 0 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:13.675 nvmf_trace.0 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74514 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74514 ']' 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74514 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74514 00:15:13.675 killing process with pid 74514 00:15:13.675 Received shutdown signal, test time was about 10.000000 seconds 00:15:13.675 00:15:13.675 Latency(us) 00:15:13.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.675 =================================================================================================================== 00:15:13.675 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74514' 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74514 00:15:13.675 [2024-07-15 19:05:40.918260] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:13.675 19:05:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74514 00:15:13.933 19:05:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:13.933 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
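(After the 10-second verify workload, the EXIT traps unwind the whole setup. The cleanup traced below condenses to roughly the following, with PIDs and paths as logged; the internals of _remove_spdk_ns are not shown in this excerpt, so the comment on that line is an assumption.)

  tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0  # keep trace buffer
  kill 74514 && wait 74514                  # stop bdevperf
  sync
  modprobe -v -r nvme-tcp                   # also drops the now-unused nvme_fabrics / nvme_keyring modules
  modprobe -v -r nvme-fabrics
  kill 74480 && wait 74480                  # stop nvmf_tgt
  _remove_spdk_ns                           # assumed: removes the nvmf_tgt_ns_spdk namespace
  ip -4 addr flush nvmf_init_if
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt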
00:15:13.933 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:13.933 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.933 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:13.933 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.933 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.933 rmmod nvme_tcp 00:15:14.191 rmmod nvme_fabrics 00:15:14.191 rmmod nvme_keyring 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74480 ']' 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74480 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74480 ']' 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74480 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74480 00:15:14.191 killing process with pid 74480 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74480' 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74480 00:15:14.191 [2024-07-15 19:05:41.298632] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:14.191 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74480 00:15:14.448 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:14.448 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:14.448 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:14.448 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.448 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.448 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.448 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.448 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.448 19:05:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:14.448 19:05:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:14.448 ************************************ 00:15:14.448 END TEST nvmf_fips 00:15:14.448 ************************************ 00:15:14.448 00:15:14.448 real 0m14.216s 00:15:14.448 user 0m19.533s 00:15:14.448 sys 0m5.628s 00:15:14.448 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:14.448 19:05:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:14.448 19:05:41 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:14.448 19:05:41 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:15:14.448 19:05:41 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:15:14.448 19:05:41 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:14.448 19:05:41 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:14.448 19:05:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:14.448 19:05:41 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:14.448 19:05:41 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:14.449 19:05:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:14.449 19:05:41 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:14.449 19:05:41 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:14.449 19:05:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:14.449 19:05:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.449 19:05:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:14.449 ************************************ 00:15:14.449 START TEST nvmf_identify 00:15:14.449 ************************************ 00:15:14.449 19:05:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:14.707 * Looking for test storage... 00:15:14.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:14.707 19:05:41 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:14.707 19:05:41 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:14.707 Cannot find device "nvmf_tgt_br" 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:14.707 Cannot find device "nvmf_tgt_br2" 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:14.707 19:05:41 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:14.707 Cannot find device "nvmf_tgt_br" 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:14.707 Cannot find device "nvmf_tgt_br2" 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:14.707 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:14.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:14.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:14.708 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:14.966 19:05:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:14.966 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:14.966 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:14.966 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:14.966 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:14.966 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:15:14.966 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:14.966 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:14.966 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:14.966 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:14.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:14.967 00:15:14.967 --- 10.0.0.2 ping statistics --- 00:15:14.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.967 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:14.967 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:14.967 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:14.967 00:15:14.967 --- 10.0.0.3 ping statistics --- 00:15:14.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.967 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:14.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:14.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:14.967 00:15:14.967 --- 10.0.0.1 ping statistics --- 00:15:14.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.967 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74862 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74862 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74862 ']' 00:15:14.967 19:05:42 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:14.967 19:05:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:14.967 [2024-07-15 19:05:42.162942] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:15:14.967 [2024-07-15 19:05:42.163015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.225 [2024-07-15 19:05:42.301671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.225 [2024-07-15 19:05:42.430100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.225 [2024-07-15 19:05:42.430383] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.225 [2024-07-15 19:05:42.430671] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.225 [2024-07-15 19:05:42.430841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.225 [2024-07-15 19:05:42.430952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:15.225 [2024-07-15 19:05:42.431154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.225 [2024-07-15 19:05:42.431294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.225 [2024-07-15 19:05:42.431358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.225 [2024-07-15 19:05:42.431361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.225 [2024-07-15 19:05:42.489714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.167 [2024-07-15 19:05:43.153996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.167 Malloc0 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.167 [2024-07-15 19:05:43.275261] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.167 [ 00:15:16.167 { 00:15:16.167 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:16.167 "subtype": "Discovery", 00:15:16.167 "listen_addresses": [ 00:15:16.167 { 00:15:16.167 "trtype": "TCP", 00:15:16.167 "adrfam": "IPv4", 00:15:16.167 "traddr": "10.0.0.2", 00:15:16.167 "trsvcid": "4420" 00:15:16.167 } 00:15:16.167 ], 00:15:16.167 "allow_any_host": true, 00:15:16.167 "hosts": [] 00:15:16.167 }, 00:15:16.167 { 00:15:16.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.167 "subtype": "NVMe", 00:15:16.167 "listen_addresses": [ 00:15:16.167 { 00:15:16.167 "trtype": "TCP", 00:15:16.167 "adrfam": "IPv4", 00:15:16.167 "traddr": "10.0.0.2", 00:15:16.167 "trsvcid": "4420" 00:15:16.167 } 00:15:16.167 ], 00:15:16.167 "allow_any_host": true, 00:15:16.167 "hosts": [], 00:15:16.167 "serial_number": "SPDK00000000000001", 00:15:16.167 "model_number": "SPDK bdev Controller", 00:15:16.167 "max_namespaces": 32, 00:15:16.167 "min_cntlid": 1, 00:15:16.167 "max_cntlid": 65519, 00:15:16.167 "namespaces": [ 00:15:16.167 { 00:15:16.167 "nsid": 1, 00:15:16.167 "bdev_name": "Malloc0", 00:15:16.167 "name": "Malloc0", 00:15:16.167 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:16.167 "eui64": "ABCDEF0123456789", 00:15:16.167 "uuid": "d90dd946-cc29-43e0-bb56-35f0771f7a50" 00:15:16.167 } 00:15:16.167 ] 00:15:16.167 } 00:15:16.167 ] 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.167 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:16.167 [2024-07-15 19:05:43.332587] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
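(Before spdk_nvme_identify is invoked, identify.sh provisions the target through rpc_cmd, which is effectively a wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock. Written out as plain rpc.py calls, the sequence traced above is:)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport for NVMe-oF
  $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB ram bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_get_subsystems                                       # prints the JSON dump shown above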
00:15:16.167 [2024-07-15 19:05:43.332822] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74897 ] 00:15:16.432 [2024-07-15 19:05:43.474794] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:16.432 [2024-07-15 19:05:43.474886] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:16.432 [2024-07-15 19:05:43.474894] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:16.432 [2024-07-15 19:05:43.474908] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:16.432 [2024-07-15 19:05:43.474917] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:16.432 [2024-07-15 19:05:43.475071] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:16.432 [2024-07-15 19:05:43.475127] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18ff510 0 00:15:16.432 [2024-07-15 19:05:43.482542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:16.432 [2024-07-15 19:05:43.482570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:16.432 [2024-07-15 19:05:43.482592] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:16.432 [2024-07-15 19:05:43.482596] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:16.432 [2024-07-15 19:05:43.482652] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.482660] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.482664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ff510) 00:15:16.432 [2024-07-15 19:05:43.482704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:16.432 [2024-07-15 19:05:43.482747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961f00, cid 0, qid 0 00:15:16.432 [2024-07-15 19:05:43.489552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.432 [2024-07-15 19:05:43.489577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.432 [2024-07-15 19:05:43.489583] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.489588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1961f00) on tqpair=0x18ff510 00:15:16.432 [2024-07-15 19:05:43.489604] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:16.432 [2024-07-15 19:05:43.489614] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:16.432 [2024-07-15 19:05:43.489621] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:16.432 [2024-07-15 19:05:43.489641] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.489647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.432 
[2024-07-15 19:05:43.489652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ff510) 00:15:16.432 [2024-07-15 19:05:43.489662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.432 [2024-07-15 19:05:43.489692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961f00, cid 0, qid 0 00:15:16.432 [2024-07-15 19:05:43.489762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.432 [2024-07-15 19:05:43.489769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.432 [2024-07-15 19:05:43.489773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.489778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1961f00) on tqpair=0x18ff510 00:15:16.432 [2024-07-15 19:05:43.489784] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:16.432 [2024-07-15 19:05:43.489792] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:16.432 [2024-07-15 19:05:43.489800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.489805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.489809] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ff510) 00:15:16.432 [2024-07-15 19:05:43.489817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.432 [2024-07-15 19:05:43.489837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961f00, cid 0, qid 0 00:15:16.432 [2024-07-15 19:05:43.489881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.432 [2024-07-15 19:05:43.489888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.432 [2024-07-15 19:05:43.489892] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.489896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1961f00) on tqpair=0x18ff510 00:15:16.432 [2024-07-15 19:05:43.489904] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:16.432 [2024-07-15 19:05:43.489914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:16.432 [2024-07-15 19:05:43.489922] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.489927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.489931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ff510) 00:15:16.432 [2024-07-15 19:05:43.489938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.432 [2024-07-15 19:05:43.489957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961f00, cid 0, qid 0 00:15:16.432 [2024-07-15 19:05:43.490004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.432 [2024-07-15 19:05:43.490011] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.432 [2024-07-15 19:05:43.490015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.490019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1961f00) on tqpair=0x18ff510 00:15:16.432 [2024-07-15 19:05:43.490025] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:16.432 [2024-07-15 19:05:43.490036] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.490041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.490045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ff510) 00:15:16.432 [2024-07-15 19:05:43.490053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.432 [2024-07-15 19:05:43.490071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961f00, cid 0, qid 0 00:15:16.432 [2024-07-15 19:05:43.490118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.432 [2024-07-15 19:05:43.490125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.432 [2024-07-15 19:05:43.490129] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.432 [2024-07-15 19:05:43.490133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1961f00) on tqpair=0x18ff510 00:15:16.432 [2024-07-15 19:05:43.490139] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:16.432 [2024-07-15 19:05:43.490145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:16.432 [2024-07-15 19:05:43.490153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:16.432 [2024-07-15 19:05:43.490259] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:16.433 [2024-07-15 19:05:43.490265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:16.433 [2024-07-15 19:05:43.490284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ff510) 00:15:16.433 [2024-07-15 19:05:43.490300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.433 [2024-07-15 19:05:43.490327] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961f00, cid 0, qid 0 00:15:16.433 [2024-07-15 19:05:43.490372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.433 [2024-07-15 19:05:43.490379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.433 [2024-07-15 19:05:43.490383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.433 
[2024-07-15 19:05:43.490387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1961f00) on tqpair=0x18ff510 00:15:16.433 [2024-07-15 19:05:43.490393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:16.433 [2024-07-15 19:05:43.490403] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490413] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ff510) 00:15:16.433 [2024-07-15 19:05:43.490420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.433 [2024-07-15 19:05:43.490438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961f00, cid 0, qid 0 00:15:16.433 [2024-07-15 19:05:43.490485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.433 [2024-07-15 19:05:43.490492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.433 [2024-07-15 19:05:43.490496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1961f00) on tqpair=0x18ff510 00:15:16.433 [2024-07-15 19:05:43.490523] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:16.433 [2024-07-15 19:05:43.490528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:16.433 [2024-07-15 19:05:43.490537] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:16.433 [2024-07-15 19:05:43.490549] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:16.433 [2024-07-15 19:05:43.490561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ff510) 00:15:16.433 [2024-07-15 19:05:43.490574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.433 [2024-07-15 19:05:43.490595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961f00, cid 0, qid 0 00:15:16.433 [2024-07-15 19:05:43.490681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.433 [2024-07-15 19:05:43.490689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.433 [2024-07-15 19:05:43.490693] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490698] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ff510): datao=0, datal=4096, cccid=0 00:15:16.433 [2024-07-15 19:05:43.490703] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1961f00) on tqpair(0x18ff510): expected_datao=0, payload_size=4096 00:15:16.433 [2024-07-15 19:05:43.490708] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.433 
[2024-07-15 19:05:43.490717] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490722] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.433 [2024-07-15 19:05:43.490738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.433 [2024-07-15 19:05:43.490742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1961f00) on tqpair=0x18ff510 00:15:16.433 [2024-07-15 19:05:43.490756] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:16.433 [2024-07-15 19:05:43.490762] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:16.433 [2024-07-15 19:05:43.490767] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:16.433 [2024-07-15 19:05:43.490773] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:16.433 [2024-07-15 19:05:43.490778] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:16.433 [2024-07-15 19:05:43.490783] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:16.433 [2024-07-15 19:05:43.490792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:16.433 [2024-07-15 19:05:43.490801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ff510) 00:15:16.433 [2024-07-15 19:05:43.490818] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:16.433 [2024-07-15 19:05:43.490845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961f00, cid 0, qid 0 00:15:16.433 [2024-07-15 19:05:43.490899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.433 [2024-07-15 19:05:43.490906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.433 [2024-07-15 19:05:43.490910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1961f00) on tqpair=0x18ff510 00:15:16.433 [2024-07-15 19:05:43.490923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490932] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ff510) 00:15:16.433 [2024-07-15 19:05:43.490939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.433 [2024-07-15 19:05:43.490946] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490950] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18ff510) 00:15:16.433 [2024-07-15 19:05:43.490960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.433 [2024-07-15 19:05:43.490967] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490971] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490975] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18ff510) 00:15:16.433 [2024-07-15 19:05:43.490981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.433 [2024-07-15 19:05:43.490988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.490996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.433 [2024-07-15 19:05:43.491002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.433 [2024-07-15 19:05:43.491007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:16.433 [2024-07-15 19:05:43.491021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:16.433 [2024-07-15 19:05:43.491030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.491034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ff510) 00:15:16.433 [2024-07-15 19:05:43.491042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.433 [2024-07-15 19:05:43.491063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1961f00, cid 0, qid 0 00:15:16.433 [2024-07-15 19:05:43.491071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962080, cid 1, qid 0 00:15:16.433 [2024-07-15 19:05:43.491076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962200, cid 2, qid 0 00:15:16.433 [2024-07-15 19:05:43.491081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.433 [2024-07-15 19:05:43.491086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962500, cid 4, qid 0 00:15:16.433 [2024-07-15 19:05:43.491173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.433 [2024-07-15 19:05:43.491180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.433 [2024-07-15 19:05:43.491184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.491188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962500) on tqpair=0x18ff510 00:15:16.433 [2024-07-15 19:05:43.491194] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:16.433 [2024-07-15 19:05:43.491204] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:16.433 [2024-07-15 19:05:43.491217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.491222] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ff510) 00:15:16.433 [2024-07-15 19:05:43.491229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.433 [2024-07-15 19:05:43.491248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962500, cid 4, qid 0 00:15:16.433 [2024-07-15 19:05:43.491310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.433 [2024-07-15 19:05:43.491317] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.433 [2024-07-15 19:05:43.491321] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.491325] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ff510): datao=0, datal=4096, cccid=4 00:15:16.433 [2024-07-15 19:05:43.491330] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1962500) on tqpair(0x18ff510): expected_datao=0, payload_size=4096 00:15:16.433 [2024-07-15 19:05:43.491335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.491342] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.491347] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.491355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.433 [2024-07-15 19:05:43.491362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.433 [2024-07-15 19:05:43.491366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.491370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962500) on tqpair=0x18ff510 00:15:16.433 [2024-07-15 19:05:43.491385] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:16.433 [2024-07-15 19:05:43.491417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.433 [2024-07-15 19:05:43.491423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ff510) 00:15:16.434 [2024-07-15 19:05:43.491431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.434 [2024-07-15 19:05:43.491439] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18ff510) 00:15:16.434 [2024-07-15 19:05:43.491454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.434 [2024-07-15 19:05:43.491479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1962500, cid 4, qid 0 00:15:16.434 [2024-07-15 19:05:43.491487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962680, cid 5, qid 0 00:15:16.434 [2024-07-15 19:05:43.491604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.434 [2024-07-15 19:05:43.491613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.434 [2024-07-15 19:05:43.491617] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491621] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ff510): datao=0, datal=1024, cccid=4 00:15:16.434 [2024-07-15 19:05:43.491626] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1962500) on tqpair(0x18ff510): expected_datao=0, payload_size=1024 00:15:16.434 [2024-07-15 19:05:43.491631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491638] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491642] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491648] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.434 [2024-07-15 19:05:43.491654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.434 [2024-07-15 19:05:43.491658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962680) on tqpair=0x18ff510 00:15:16.434 [2024-07-15 19:05:43.491682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.434 [2024-07-15 19:05:43.491690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.434 [2024-07-15 19:05:43.491694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962500) on tqpair=0x18ff510 00:15:16.434 [2024-07-15 19:05:43.491712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ff510) 00:15:16.434 [2024-07-15 19:05:43.491726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.434 [2024-07-15 19:05:43.491752] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962500, cid 4, qid 0 00:15:16.434 [2024-07-15 19:05:43.491819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.434 [2024-07-15 19:05:43.491826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.434 [2024-07-15 19:05:43.491830] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491834] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ff510): datao=0, datal=3072, cccid=4 00:15:16.434 [2024-07-15 19:05:43.491839] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1962500) on tqpair(0x18ff510): expected_datao=0, payload_size=3072 00:15:16.434 [2024-07-15 19:05:43.491843] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491851] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491855] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.434 [2024-07-15 19:05:43.491870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.434 [2024-07-15 19:05:43.491874] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962500) on tqpair=0x18ff510 00:15:16.434 [2024-07-15 19:05:43.491889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.491894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ff510) 00:15:16.434 [2024-07-15 19:05:43.491901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.434 [2024-07-15 19:05:43.491925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962500, cid 4, qid 0 00:15:16.434 [2024-07-15 19:05:43.491987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.434 [2024-07-15 19:05:43.491994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.434 [2024-07-15 19:05:43.491998] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.492002] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ff510): datao=0, datal=8, cccid=4 00:15:16.434 [2024-07-15 19:05:43.492007] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1962500) on tqpair(0x18ff510): expected_datao=0, payload_size=8 00:15:16.434 [2024-07-15 19:05:43.492012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.492019] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.492022] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.492038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.434 [2024-07-15 19:05:43.492046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.434 [2024-07-15 19:05:43.492050] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.434 [2024-07-15 19:05:43.492054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962500) on tqpair=0x18ff510 00:15:16.434 ===================================================== 00:15:16.434 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:16.434 ===================================================== 00:15:16.434 Controller Capabilities/Features 00:15:16.434 ================================ 00:15:16.434 Vendor ID: 0000 00:15:16.434 Subsystem Vendor ID: 0000 00:15:16.434 Serial Number: .................... 00:15:16.434 Model Number: ........................................ 
00:15:16.434 Firmware Version: 24.09 00:15:16.434 Recommended Arb Burst: 0 00:15:16.434 IEEE OUI Identifier: 00 00 00 00:15:16.434 Multi-path I/O 00:15:16.434 May have multiple subsystem ports: No 00:15:16.434 May have multiple controllers: No 00:15:16.434 Associated with SR-IOV VF: No 00:15:16.434 Max Data Transfer Size: 131072 00:15:16.434 Max Number of Namespaces: 0 00:15:16.434 Max Number of I/O Queues: 1024 00:15:16.434 NVMe Specification Version (VS): 1.3 00:15:16.434 NVMe Specification Version (Identify): 1.3 00:15:16.434 Maximum Queue Entries: 128 00:15:16.434 Contiguous Queues Required: Yes 00:15:16.434 Arbitration Mechanisms Supported 00:15:16.434 Weighted Round Robin: Not Supported 00:15:16.434 Vendor Specific: Not Supported 00:15:16.434 Reset Timeout: 15000 ms 00:15:16.434 Doorbell Stride: 4 bytes 00:15:16.434 NVM Subsystem Reset: Not Supported 00:15:16.434 Command Sets Supported 00:15:16.434 NVM Command Set: Supported 00:15:16.434 Boot Partition: Not Supported 00:15:16.434 Memory Page Size Minimum: 4096 bytes 00:15:16.434 Memory Page Size Maximum: 4096 bytes 00:15:16.434 Persistent Memory Region: Not Supported 00:15:16.434 Optional Asynchronous Events Supported 00:15:16.434 Namespace Attribute Notices: Not Supported 00:15:16.434 Firmware Activation Notices: Not Supported 00:15:16.434 ANA Change Notices: Not Supported 00:15:16.434 PLE Aggregate Log Change Notices: Not Supported 00:15:16.434 LBA Status Info Alert Notices: Not Supported 00:15:16.434 EGE Aggregate Log Change Notices: Not Supported 00:15:16.434 Normal NVM Subsystem Shutdown event: Not Supported 00:15:16.434 Zone Descriptor Change Notices: Not Supported 00:15:16.434 Discovery Log Change Notices: Supported 00:15:16.434 Controller Attributes 00:15:16.434 128-bit Host Identifier: Not Supported 00:15:16.434 Non-Operational Permissive Mode: Not Supported 00:15:16.434 NVM Sets: Not Supported 00:15:16.434 Read Recovery Levels: Not Supported 00:15:16.434 Endurance Groups: Not Supported 00:15:16.434 Predictable Latency Mode: Not Supported 00:15:16.434 Traffic Based Keep ALive: Not Supported 00:15:16.434 Namespace Granularity: Not Supported 00:15:16.434 SQ Associations: Not Supported 00:15:16.434 UUID List: Not Supported 00:15:16.434 Multi-Domain Subsystem: Not Supported 00:15:16.434 Fixed Capacity Management: Not Supported 00:15:16.434 Variable Capacity Management: Not Supported 00:15:16.434 Delete Endurance Group: Not Supported 00:15:16.434 Delete NVM Set: Not Supported 00:15:16.434 Extended LBA Formats Supported: Not Supported 00:15:16.434 Flexible Data Placement Supported: Not Supported 00:15:16.434 00:15:16.434 Controller Memory Buffer Support 00:15:16.434 ================================ 00:15:16.434 Supported: No 00:15:16.434 00:15:16.434 Persistent Memory Region Support 00:15:16.434 ================================ 00:15:16.434 Supported: No 00:15:16.434 00:15:16.434 Admin Command Set Attributes 00:15:16.434 ============================ 00:15:16.434 Security Send/Receive: Not Supported 00:15:16.434 Format NVM: Not Supported 00:15:16.434 Firmware Activate/Download: Not Supported 00:15:16.434 Namespace Management: Not Supported 00:15:16.434 Device Self-Test: Not Supported 00:15:16.434 Directives: Not Supported 00:15:16.434 NVMe-MI: Not Supported 00:15:16.434 Virtualization Management: Not Supported 00:15:16.434 Doorbell Buffer Config: Not Supported 00:15:16.434 Get LBA Status Capability: Not Supported 00:15:16.434 Command & Feature Lockdown Capability: Not Supported 00:15:16.434 Abort Command Limit: 1 00:15:16.434 Async 
Event Request Limit: 4 00:15:16.434 Number of Firmware Slots: N/A 00:15:16.434 Firmware Slot 1 Read-Only: N/A 00:15:16.434 Firmware Activation Without Reset: N/A 00:15:16.434 Multiple Update Detection Support: N/A 00:15:16.434 Firmware Update Granularity: No Information Provided 00:15:16.434 Per-Namespace SMART Log: No 00:15:16.434 Asymmetric Namespace Access Log Page: Not Supported 00:15:16.434 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:16.434 Command Effects Log Page: Not Supported 00:15:16.434 Get Log Page Extended Data: Supported 00:15:16.434 Telemetry Log Pages: Not Supported 00:15:16.434 Persistent Event Log Pages: Not Supported 00:15:16.434 Supported Log Pages Log Page: May Support 00:15:16.434 Commands Supported & Effects Log Page: Not Supported 00:15:16.434 Feature Identifiers & Effects Log Page:May Support 00:15:16.435 NVMe-MI Commands & Effects Log Page: May Support 00:15:16.435 Data Area 4 for Telemetry Log: Not Supported 00:15:16.435 Error Log Page Entries Supported: 128 00:15:16.435 Keep Alive: Not Supported 00:15:16.435 00:15:16.435 NVM Command Set Attributes 00:15:16.435 ========================== 00:15:16.435 Submission Queue Entry Size 00:15:16.435 Max: 1 00:15:16.435 Min: 1 00:15:16.435 Completion Queue Entry Size 00:15:16.435 Max: 1 00:15:16.435 Min: 1 00:15:16.435 Number of Namespaces: 0 00:15:16.435 Compare Command: Not Supported 00:15:16.435 Write Uncorrectable Command: Not Supported 00:15:16.435 Dataset Management Command: Not Supported 00:15:16.435 Write Zeroes Command: Not Supported 00:15:16.435 Set Features Save Field: Not Supported 00:15:16.435 Reservations: Not Supported 00:15:16.435 Timestamp: Not Supported 00:15:16.435 Copy: Not Supported 00:15:16.435 Volatile Write Cache: Not Present 00:15:16.435 Atomic Write Unit (Normal): 1 00:15:16.435 Atomic Write Unit (PFail): 1 00:15:16.435 Atomic Compare & Write Unit: 1 00:15:16.435 Fused Compare & Write: Supported 00:15:16.435 Scatter-Gather List 00:15:16.435 SGL Command Set: Supported 00:15:16.435 SGL Keyed: Supported 00:15:16.435 SGL Bit Bucket Descriptor: Not Supported 00:15:16.435 SGL Metadata Pointer: Not Supported 00:15:16.435 Oversized SGL: Not Supported 00:15:16.435 SGL Metadata Address: Not Supported 00:15:16.435 SGL Offset: Supported 00:15:16.435 Transport SGL Data Block: Not Supported 00:15:16.435 Replay Protected Memory Block: Not Supported 00:15:16.435 00:15:16.435 Firmware Slot Information 00:15:16.435 ========================= 00:15:16.435 Active slot: 0 00:15:16.435 00:15:16.435 00:15:16.435 Error Log 00:15:16.435 ========= 00:15:16.435 00:15:16.435 Active Namespaces 00:15:16.435 ================= 00:15:16.435 Discovery Log Page 00:15:16.435 ================== 00:15:16.435 Generation Counter: 2 00:15:16.435 Number of Records: 2 00:15:16.435 Record Format: 0 00:15:16.435 00:15:16.435 Discovery Log Entry 0 00:15:16.435 ---------------------- 00:15:16.435 Transport Type: 3 (TCP) 00:15:16.435 Address Family: 1 (IPv4) 00:15:16.435 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:16.435 Entry Flags: 00:15:16.435 Duplicate Returned Information: 1 00:15:16.435 Explicit Persistent Connection Support for Discovery: 1 00:15:16.435 Transport Requirements: 00:15:16.435 Secure Channel: Not Required 00:15:16.435 Port ID: 0 (0x0000) 00:15:16.435 Controller ID: 65535 (0xffff) 00:15:16.435 Admin Max SQ Size: 128 00:15:16.435 Transport Service Identifier: 4420 00:15:16.435 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:16.435 Transport Address: 10.0.0.2 00:15:16.435 
Discovery Log Entry 1 00:15:16.435 ---------------------- 00:15:16.435 Transport Type: 3 (TCP) 00:15:16.435 Address Family: 1 (IPv4) 00:15:16.435 Subsystem Type: 2 (NVM Subsystem) 00:15:16.435 Entry Flags: 00:15:16.435 Duplicate Returned Information: 0 00:15:16.435 Explicit Persistent Connection Support for Discovery: 0 00:15:16.435 Transport Requirements: 00:15:16.435 Secure Channel: Not Required 00:15:16.435 Port ID: 0 (0x0000) 00:15:16.435 Controller ID: 65535 (0xffff) 00:15:16.435 Admin Max SQ Size: 128 00:15:16.435 Transport Service Identifier: 4420 00:15:16.435 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:16.435 Transport Address: 10.0.0.2 [2024-07-15 19:05:43.492189] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:16.435 [2024-07-15 19:05:43.492207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1961f00) on tqpair=0x18ff510 00:15:16.435 [2024-07-15 19:05:43.492215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.435 [2024-07-15 19:05:43.492222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962080) on tqpair=0x18ff510 00:15:16.435 [2024-07-15 19:05:43.492227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.435 [2024-07-15 19:05:43.492232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962200) on tqpair=0x18ff510 00:15:16.435 [2024-07-15 19:05:43.492238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.435 [2024-07-15 19:05:43.492243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.435 [2024-07-15 19:05:43.492248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.435 [2024-07-15 19:05:43.492258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.435 [2024-07-15 19:05:43.492277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.435 [2024-07-15 19:05:43.492303] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.435 [2024-07-15 19:05:43.492362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.435 [2024-07-15 19:05:43.492370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.435 [2024-07-15 19:05:43.492374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.435 [2024-07-15 19:05:43.492387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.435 [2024-07-15 
19:05:43.492403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.435 [2024-07-15 19:05:43.492426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.435 [2024-07-15 19:05:43.492522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.435 [2024-07-15 19:05:43.492533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.435 [2024-07-15 19:05:43.492536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.435 [2024-07-15 19:05:43.492547] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:16.435 [2024-07-15 19:05:43.492553] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:16.435 [2024-07-15 19:05:43.492565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.435 [2024-07-15 19:05:43.492582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.435 [2024-07-15 19:05:43.492605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.435 [2024-07-15 19:05:43.492654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.435 [2024-07-15 19:05:43.492661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.435 [2024-07-15 19:05:43.492665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.435 [2024-07-15 19:05:43.492682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492687] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.435 [2024-07-15 19:05:43.492698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.435 [2024-07-15 19:05:43.492717] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.435 [2024-07-15 19:05:43.492764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.435 [2024-07-15 19:05:43.492771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.435 [2024-07-15 19:05:43.492775] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.435 [2024-07-15 19:05:43.492791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492800] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.435 [2024-07-15 19:05:43.492807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.435 [2024-07-15 19:05:43.492825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.435 [2024-07-15 19:05:43.492871] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.435 [2024-07-15 19:05:43.492878] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.435 [2024-07-15 19:05:43.492882] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.435 [2024-07-15 19:05:43.492897] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492902] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.435 [2024-07-15 19:05:43.492914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.435 [2024-07-15 19:05:43.492932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.435 [2024-07-15 19:05:43.492975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.435 [2024-07-15 19:05:43.492982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.435 [2024-07-15 19:05:43.492985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.435 [2024-07-15 19:05:43.492990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.435 [2024-07-15 19:05:43.493001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.436 [2024-07-15 19:05:43.493017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.436 [2024-07-15 19:05:43.493036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.436 [2024-07-15 19:05:43.493078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.436 [2024-07-15 19:05:43.493085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.436 [2024-07-15 19:05:43.493089] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.436 [2024-07-15 19:05:43.493104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.436 [2024-07-15 19:05:43.493120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.436 [2024-07-15 19:05:43.493139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.436 [2024-07-15 19:05:43.493188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.436 [2024-07-15 19:05:43.493195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.436 [2024-07-15 19:05:43.493199] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493204] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.436 [2024-07-15 19:05:43.493215] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493224] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.436 [2024-07-15 19:05:43.493231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.436 [2024-07-15 19:05:43.493249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.436 [2024-07-15 19:05:43.493298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.436 [2024-07-15 19:05:43.493305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.436 [2024-07-15 19:05:43.493309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493314] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.436 [2024-07-15 19:05:43.493324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493333] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.436 [2024-07-15 19:05:43.493341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.436 [2024-07-15 19:05:43.493359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.436 [2024-07-15 19:05:43.493405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.436 [2024-07-15 19:05:43.493412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.436 [2024-07-15 19:05:43.493416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.436 [2024-07-15 19:05:43.493432] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.436 [2024-07-15 19:05:43.493448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.436 [2024-07-15 19:05:43.493466] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.436 
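
The GET LOG PAGE (02) commands with cdw10 values 00ff0070, 02ff0070 and 00010070 earlier in this trace read log identifier 0x70, the discovery log whose two records are decoded in the "Discovery Log Page" section above; the FABRIC PROPERTY GET polling around this point is the subsequent controller shutdown (CSTS polling) after the identify utility detaches. A rough sketch of fetching that log page through the SPDK admin API is shown below; the single 8 KiB read, helper names and synchronous polling loop are simplifications for illustration, not a step-for-step copy of what the utility does (it reads the header first and re-reads once the record count is known).

/* Sketch: read the discovery log page (0x70) from an already connected
 * discovery controller and print the subsystem NQNs it advertises. */
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static bool g_done;

static void
get_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    (void)ctx;
    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "GET LOG PAGE failed\n");
    }
    g_done = true;
}

/* ctrlr: discovery controller handle, e.g. from spdk_nvme_connect(). */
static void
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
    /* 8 KiB covers the header plus a handful of 1 KiB entries. */
    struct spdk_nvmf_discovery_log_page *log = calloc(1, 8192);

    if (log == NULL) {
        return;
    }
    g_done = false;
    if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
                                         log, 8192, 0, get_log_done, NULL) != 0) {
        free(log);
        return;
    }
    while (!g_done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    for (uint64_t i = 0; i < log->numrec; i++) {
        printf("entry %" PRIu64 ": subnqn %s trsvcid %s\n", i,
               (const char *)log->entries[i].subnqn,
               (const char *)log->entries[i].trsvcid);
    }
    free(log);
}
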
[2024-07-15 19:05:43.493528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.436 [2024-07-15 19:05:43.493536] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.436 [2024-07-15 19:05:43.493540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.436 [2024-07-15 19:05:43.493556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493565] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.436 [2024-07-15 19:05:43.493572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.436 [2024-07-15 19:05:43.493593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.436 [2024-07-15 19:05:43.493647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.436 [2024-07-15 19:05:43.493655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.436 [2024-07-15 19:05:43.493658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.436 [2024-07-15 19:05:43.493674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.436 [2024-07-15 19:05:43.493690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.436 [2024-07-15 19:05:43.493709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.436 [2024-07-15 19:05:43.493761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.436 [2024-07-15 19:05:43.493768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.436 [2024-07-15 19:05:43.493772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.436 [2024-07-15 19:05:43.493787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493792] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.436 [2024-07-15 19:05:43.493803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.436 [2024-07-15 19:05:43.493821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.436 [2024-07-15 19:05:43.493868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.436 [2024-07-15 19:05:43.493875] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:15:16.436 [2024-07-15 19:05:43.493879] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.436 [2024-07-15 19:05:43.493894] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493903] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.436 [2024-07-15 19:05:43.493911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.436 [2024-07-15 19:05:43.493929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.436 [2024-07-15 19:05:43.493975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.436 [2024-07-15 19:05:43.493988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.436 [2024-07-15 19:05:43.493992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.493997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.436 [2024-07-15 19:05:43.494008] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.494014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.494018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.436 [2024-07-15 19:05:43.494025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.436 [2024-07-15 19:05:43.494045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.436 [2024-07-15 19:05:43.494094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.436 [2024-07-15 19:05:43.494101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.436 [2024-07-15 19:05:43.494105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.494110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.436 [2024-07-15 19:05:43.494120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.494126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.436 [2024-07-15 19:05:43.494130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.436 [2024-07-15 19:05:43.494137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.437 [2024-07-15 19:05:43.494156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.437 [2024-07-15 19:05:43.494204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.437 [2024-07-15 19:05:43.494211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.437 [2024-07-15 19:05:43.494215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.494220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.437 [2024-07-15 19:05:43.494230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.494235] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.494239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.437 [2024-07-15 19:05:43.494247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.437 [2024-07-15 19:05:43.494265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.437 [2024-07-15 19:05:43.494308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.437 [2024-07-15 19:05:43.494315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.437 [2024-07-15 19:05:43.494319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.494323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.437 [2024-07-15 19:05:43.494334] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.494339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.494343] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.437 [2024-07-15 19:05:43.494350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.437 [2024-07-15 19:05:43.494369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.437 [2024-07-15 19:05:43.494418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.437 [2024-07-15 19:05:43.494430] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.437 [2024-07-15 19:05:43.494435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.494440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.437 [2024-07-15 19:05:43.494451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.494456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.494460] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.437 [2024-07-15 19:05:43.494468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.437 [2024-07-15 19:05:43.494488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.437 [2024-07-15 19:05:43.498521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.437 [2024-07-15 19:05:43.498543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.437 [2024-07-15 19:05:43.498548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.498553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.437 [2024-07-15 19:05:43.498568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.498574] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.498578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ff510) 00:15:16.437 [2024-07-15 19:05:43.498587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.437 [2024-07-15 19:05:43.498613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1962380, cid 3, qid 0 00:15:16.437 [2024-07-15 19:05:43.498659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.437 [2024-07-15 19:05:43.498666] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.437 [2024-07-15 19:05:43.498670] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.498675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1962380) on tqpair=0x18ff510 00:15:16.437 [2024-07-15 19:05:43.498684] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:15:16.437 00:15:16.437 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:16.437 [2024-07-15 19:05:43.549815] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:15:16.437 [2024-07-15 19:05:43.549878] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74899 ] 00:15:16.437 [2024-07-15 19:05:43.692512] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:16.437 [2024-07-15 19:05:43.692592] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:16.437 [2024-07-15 19:05:43.692600] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:16.437 [2024-07-15 19:05:43.692615] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:16.437 [2024-07-15 19:05:43.692623] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:16.437 [2024-07-15 19:05:43.692773] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:16.437 [2024-07-15 19:05:43.692847] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fd6510 0 00:15:16.437 [2024-07-15 19:05:43.705536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:16.437 [2024-07-15 19:05:43.705560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:16.437 [2024-07-15 19:05:43.705566] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:16.437 [2024-07-15 19:05:43.705570] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:16.437 [2024-07-15 19:05:43.705629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.705637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.705641] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x1fd6510) 00:15:16.437 [2024-07-15 19:05:43.705656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:16.437 [2024-07-15 19:05:43.705689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2038f00, cid 0, qid 0 00:15:16.437 [2024-07-15 19:05:43.713551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.437 [2024-07-15 19:05:43.713572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.437 [2024-07-15 19:05:43.713578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.713583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2038f00) on tqpair=0x1fd6510 00:15:16.437 [2024-07-15 19:05:43.713597] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:16.437 [2024-07-15 19:05:43.713605] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:16.437 [2024-07-15 19:05:43.713612] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:16.437 [2024-07-15 19:05:43.713630] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.713636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.713640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd6510) 00:15:16.437 [2024-07-15 19:05:43.713649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.437 [2024-07-15 19:05:43.713676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2038f00, cid 0, qid 0 00:15:16.437 [2024-07-15 19:05:43.713733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.437 [2024-07-15 19:05:43.713741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.437 [2024-07-15 19:05:43.713744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.713748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2038f00) on tqpair=0x1fd6510 00:15:16.437 [2024-07-15 19:05:43.713754] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:16.437 [2024-07-15 19:05:43.713762] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:16.437 [2024-07-15 19:05:43.713770] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.713775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.713778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd6510) 00:15:16.437 [2024-07-15 19:05:43.713795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.437 [2024-07-15 19:05:43.713822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2038f00, cid 0, qid 0 00:15:16.437 [2024-07-15 19:05:43.713878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.437 [2024-07-15 19:05:43.713885] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:15:16.437 [2024-07-15 19:05:43.713889] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.713893] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2038f00) on tqpair=0x1fd6510 00:15:16.437 [2024-07-15 19:05:43.713899] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:16.437 [2024-07-15 19:05:43.713908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:16.437 [2024-07-15 19:05:43.713916] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.713920] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.713924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd6510) 00:15:16.437 [2024-07-15 19:05:43.713948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.437 [2024-07-15 19:05:43.713967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2038f00, cid 0, qid 0 00:15:16.437 [2024-07-15 19:05:43.714013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.437 [2024-07-15 19:05:43.714021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.437 [2024-07-15 19:05:43.714025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.714029] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2038f00) on tqpair=0x1fd6510 00:15:16.437 [2024-07-15 19:05:43.714035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:16.437 [2024-07-15 19:05:43.714046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.714051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.437 [2024-07-15 19:05:43.714055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd6510) 00:15:16.437 [2024-07-15 19:05:43.714062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.437 [2024-07-15 19:05:43.714080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2038f00, cid 0, qid 0 00:15:16.437 [2024-07-15 19:05:43.714130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.437 [2024-07-15 19:05:43.714137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.438 [2024-07-15 19:05:43.714141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2038f00) on tqpair=0x1fd6510 00:15:16.438 [2024-07-15 19:05:43.714151] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:16.438 [2024-07-15 19:05:43.714156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:16.438 [2024-07-15 19:05:43.714165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 
00:15:16.438 [2024-07-15 19:05:43.714271] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:16.438 [2024-07-15 19:05:43.714276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:16.438 [2024-07-15 19:05:43.714286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd6510) 00:15:16.438 [2024-07-15 19:05:43.714302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.438 [2024-07-15 19:05:43.714321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2038f00, cid 0, qid 0 00:15:16.438 [2024-07-15 19:05:43.714368] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.438 [2024-07-15 19:05:43.714375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.438 [2024-07-15 19:05:43.714379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2038f00) on tqpair=0x1fd6510 00:15:16.438 [2024-07-15 19:05:43.714389] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:16.438 [2024-07-15 19:05:43.714400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd6510) 00:15:16.438 [2024-07-15 19:05:43.714416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.438 [2024-07-15 19:05:43.714434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2038f00, cid 0, qid 0 00:15:16.438 [2024-07-15 19:05:43.714481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.438 [2024-07-15 19:05:43.714488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.438 [2024-07-15 19:05:43.714492] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2038f00) on tqpair=0x1fd6510 00:15:16.438 [2024-07-15 19:05:43.714504] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:16.438 [2024-07-15 19:05:43.714509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:16.438 [2024-07-15 19:05:43.714518] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:16.438 [2024-07-15 19:05:43.714529] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:16.438 [2024-07-15 19:05:43.714555] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd6510) 00:15:16.438 [2024-07-15 19:05:43.714569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.438 [2024-07-15 19:05:43.714590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2038f00, cid 0, qid 0 00:15:16.438 [2024-07-15 19:05:43.714690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.438 [2024-07-15 19:05:43.714698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.438 [2024-07-15 19:05:43.714702] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714706] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd6510): datao=0, datal=4096, cccid=0 00:15:16.438 [2024-07-15 19:05:43.714712] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2038f00) on tqpair(0x1fd6510): expected_datao=0, payload_size=4096 00:15:16.438 [2024-07-15 19:05:43.714717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714725] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714730] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714739] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.438 [2024-07-15 19:05:43.714745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.438 [2024-07-15 19:05:43.714749] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2038f00) on tqpair=0x1fd6510 00:15:16.438 [2024-07-15 19:05:43.714762] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:16.438 [2024-07-15 19:05:43.714768] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:16.438 [2024-07-15 19:05:43.714773] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:16.438 [2024-07-15 19:05:43.714778] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:16.438 [2024-07-15 19:05:43.714783] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:16.438 [2024-07-15 19:05:43.714789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:16.438 [2024-07-15 19:05:43.714798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:16.438 [2024-07-15 19:05:43.714806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd6510) 00:15:16.438 [2024-07-15 19:05:43.714823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 
cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:16.438 [2024-07-15 19:05:43.714843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2038f00, cid 0, qid 0 00:15:16.438 [2024-07-15 19:05:43.714896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.438 [2024-07-15 19:05:43.714903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.438 [2024-07-15 19:05:43.714907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2038f00) on tqpair=0x1fd6510 00:15:16.438 [2024-07-15 19:05:43.714919] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd6510) 00:15:16.438 [2024-07-15 19:05:43.714935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.438 [2024-07-15 19:05:43.714942] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714950] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fd6510) 00:15:16.438 [2024-07-15 19:05:43.714956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.438 [2024-07-15 19:05:43.714963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714971] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fd6510) 00:15:16.438 [2024-07-15 19:05:43.714976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.438 [2024-07-15 19:05:43.714983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.714991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.438 [2024-07-15 19:05:43.714997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.438 [2024-07-15 19:05:43.715004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:16.438 [2024-07-15 19:05:43.715018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:16.438 [2024-07-15 19:05:43.715026] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.715031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd6510) 00:15:16.438 [2024-07-15 19:05:43.715038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
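Once the controller is ready, the trace continues with identify controller, AER configuration, keep-alive setup, and namespace identification (spdk_nvme_ctrlr_get_ns later reports "Namespace 1 was added"). Continuing the illustrative sketch above under the same assumptions, the active namespaces of a connected controller can be enumerated like this, where ctrlr is the handle returned by spdk_nvme_connect():

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
        uint32_t nsid;

        /* Walk the active namespace list built during controller initialization. */
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

                printf("Namespace %u: %" PRIu64 " sectors of %u bytes\n",
                       nsid,
                       spdk_nvme_ns_get_num_sectors(ns),
                       spdk_nvme_ns_get_sector_size(ns));
        }
}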
00:15:16.438 [2024-07-15 19:05:43.715059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2038f00, cid 0, qid 0 00:15:16.438 [2024-07-15 19:05:43.715066] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039080, cid 1, qid 0 00:15:16.438 [2024-07-15 19:05:43.715071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039200, cid 2, qid 0 00:15:16.438 [2024-07-15 19:05:43.715076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.438 [2024-07-15 19:05:43.715081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039500, cid 4, qid 0 00:15:16.438 [2024-07-15 19:05:43.715217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.438 [2024-07-15 19:05:43.715224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.438 [2024-07-15 19:05:43.715228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.715232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039500) on tqpair=0x1fd6510 00:15:16.438 [2024-07-15 19:05:43.715237] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:16.438 [2024-07-15 19:05:43.715247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:16.438 [2024-07-15 19:05:43.715257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:16.438 [2024-07-15 19:05:43.715264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:16.438 [2024-07-15 19:05:43.715271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.715275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.438 [2024-07-15 19:05:43.715279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd6510) 00:15:16.438 [2024-07-15 19:05:43.715286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:16.438 [2024-07-15 19:05:43.715305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039500, cid 4, qid 0 00:15:16.438 [2024-07-15 19:05:43.715350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.438 [2024-07-15 19:05:43.715357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.439 [2024-07-15 19:05:43.715361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039500) on tqpair=0x1fd6510 00:15:16.439 [2024-07-15 19:05:43.715446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.715458] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.715468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1fd6510) 00:15:16.439 [2024-07-15 19:05:43.715480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.439 [2024-07-15 19:05:43.715500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039500, cid 4, qid 0 00:15:16.439 [2024-07-15 19:05:43.715573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.439 [2024-07-15 19:05:43.715583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.439 [2024-07-15 19:05:43.715587] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715591] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd6510): datao=0, datal=4096, cccid=4 00:15:16.439 [2024-07-15 19:05:43.715596] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2039500) on tqpair(0x1fd6510): expected_datao=0, payload_size=4096 00:15:16.439 [2024-07-15 19:05:43.715601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715608] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715613] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.439 [2024-07-15 19:05:43.715628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.439 [2024-07-15 19:05:43.715632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039500) on tqpair=0x1fd6510 00:15:16.439 [2024-07-15 19:05:43.715653] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:16.439 [2024-07-15 19:05:43.715665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.715677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.715685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715690] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd6510) 00:15:16.439 [2024-07-15 19:05:43.715698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.439 [2024-07-15 19:05:43.715719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039500, cid 4, qid 0 00:15:16.439 [2024-07-15 19:05:43.715792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.439 [2024-07-15 19:05:43.715800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.439 [2024-07-15 19:05:43.715804] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715808] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd6510): datao=0, datal=4096, cccid=4 00:15:16.439 [2024-07-15 19:05:43.715813] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2039500) on tqpair(0x1fd6510): expected_datao=0, payload_size=4096 00:15:16.439 [2024-07-15 
19:05:43.715818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715825] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715829] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.439 [2024-07-15 19:05:43.715843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.439 [2024-07-15 19:05:43.715847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039500) on tqpair=0x1fd6510 00:15:16.439 [2024-07-15 19:05:43.715868] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.715879] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.715888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.715893] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd6510) 00:15:16.439 [2024-07-15 19:05:43.715902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.439 [2024-07-15 19:05:43.715922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039500, cid 4, qid 0 00:15:16.439 [2024-07-15 19:05:43.715993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.439 [2024-07-15 19:05:43.715999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.439 [2024-07-15 19:05:43.716003] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716007] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd6510): datao=0, datal=4096, cccid=4 00:15:16.439 [2024-07-15 19:05:43.716012] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2039500) on tqpair(0x1fd6510): expected_datao=0, payload_size=4096 00:15:16.439 [2024-07-15 19:05:43.716017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716024] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716028] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.439 [2024-07-15 19:05:43.716042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.439 [2024-07-15 19:05:43.716046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039500) on tqpair=0x1fd6510 00:15:16.439 [2024-07-15 19:05:43.716059] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.716068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.716080] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.716087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.716093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.716099] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.716105] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:16.439 [2024-07-15 19:05:43.716110] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:16.439 [2024-07-15 19:05:43.716116] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:16.439 [2024-07-15 19:05:43.716172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd6510) 00:15:16.439 [2024-07-15 19:05:43.716205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.439 [2024-07-15 19:05:43.716213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd6510) 00:15:16.439 [2024-07-15 19:05:43.716227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.439 [2024-07-15 19:05:43.716274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039500, cid 4, qid 0 00:15:16.439 [2024-07-15 19:05:43.716283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039680, cid 5, qid 0 00:15:16.439 [2024-07-15 19:05:43.716357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.439 [2024-07-15 19:05:43.716365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.439 [2024-07-15 19:05:43.716369] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039500) on tqpair=0x1fd6510 00:15:16.439 [2024-07-15 19:05:43.716381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.439 [2024-07-15 19:05:43.716387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.439 [2024-07-15 19:05:43.716391] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039680) on tqpair=0x1fd6510 00:15:16.439 [2024-07-15 19:05:43.716406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x1fd6510) 00:15:16.439 [2024-07-15 19:05:43.716419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.439 [2024-07-15 19:05:43.716437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039680, cid 5, qid 0 00:15:16.439 [2024-07-15 19:05:43.716497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.439 [2024-07-15 19:05:43.716523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.439 [2024-07-15 19:05:43.716528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039680) on tqpair=0x1fd6510 00:15:16.439 [2024-07-15 19:05:43.716545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd6510) 00:15:16.439 [2024-07-15 19:05:43.716557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.439 [2024-07-15 19:05:43.716579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039680, cid 5, qid 0 00:15:16.439 [2024-07-15 19:05:43.716627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.439 [2024-07-15 19:05:43.716634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.439 [2024-07-15 19:05:43.716638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039680) on tqpair=0x1fd6510 00:15:16.439 [2024-07-15 19:05:43.716653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.439 [2024-07-15 19:05:43.716658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd6510) 00:15:16.439 [2024-07-15 19:05:43.716665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.439 [2024-07-15 19:05:43.716683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039680, cid 5, qid 0 00:15:16.439 [2024-07-15 19:05:43.716734] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.439 [2024-07-15 19:05:43.716741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.439 [2024-07-15 19:05:43.716744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.440 [2024-07-15 19:05:43.716749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039680) on tqpair=0x1fd6510 00:15:16.440 [2024-07-15 19:05:43.716769] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.440 [2024-07-15 19:05:43.716775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd6510) 00:15:16.440 [2024-07-15 19:05:43.716783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.440 [2024-07-15 19:05:43.716791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.440 [2024-07-15 19:05:43.716795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1fd6510) 00:15:16.440 [2024-07-15 19:05:43.716802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.440 [2024-07-15 19:05:43.716817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.440 [2024-07-15 19:05:43.716821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1fd6510) 00:15:16.701 [2024-07-15 19:05:43.716828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.701 [2024-07-15 19:05:43.716841] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.716846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fd6510) 00:15:16.701 [2024-07-15 19:05:43.716852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.701 [2024-07-15 19:05:43.716873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039680, cid 5, qid 0 00:15:16.701 [2024-07-15 19:05:43.716880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039500, cid 4, qid 0 00:15:16.701 [2024-07-15 19:05:43.716885] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039800, cid 6, qid 0 00:15:16.701 [2024-07-15 19:05:43.716891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039980, cid 7, qid 0 00:15:16.701 [2024-07-15 19:05:43.717029] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.701 [2024-07-15 19:05:43.717036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.701 [2024-07-15 19:05:43.717040] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717044] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd6510): datao=0, datal=8192, cccid=5 00:15:16.701 [2024-07-15 19:05:43.717049] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2039680) on tqpair(0x1fd6510): expected_datao=0, payload_size=8192 00:15:16.701 [2024-07-15 19:05:43.717054] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717070] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717076] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.701 [2024-07-15 19:05:43.717088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.701 [2024-07-15 19:05:43.717092] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717096] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd6510): datao=0, datal=512, cccid=4 00:15:16.701 [2024-07-15 19:05:43.717101] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2039500) on tqpair(0x1fd6510): expected_datao=0, payload_size=512 00:15:16.701 [2024-07-15 19:05:43.717106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717112] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717116] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.701 [2024-07-15 19:05:43.717127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.701 [2024-07-15 19:05:43.717131] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717135] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd6510): datao=0, datal=512, cccid=6 00:15:16.701 [2024-07-15 19:05:43.717139] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2039800) on tqpair(0x1fd6510): expected_datao=0, payload_size=512 00:15:16.701 [2024-07-15 19:05:43.717144] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717150] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717154] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:16.701 [2024-07-15 19:05:43.717165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:16.701 [2024-07-15 19:05:43.717169] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717172] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd6510): datao=0, datal=4096, cccid=7 00:15:16.701 [2024-07-15 19:05:43.717177] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2039980) on tqpair(0x1fd6510): expected_datao=0, payload_size=4096 00:15:16.701 [2024-07-15 19:05:43.717182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717189] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717194] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.701 [2024-07-15 19:05:43.717208] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.701 [2024-07-15 19:05:43.717212] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039680) on tqpair=0x1fd6510 00:15:16.701 ===================================================== 00:15:16.701 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:16.701 ===================================================== 00:15:16.701 Controller Capabilities/Features 00:15:16.701 ================================ 00:15:16.701 Vendor ID: 8086 00:15:16.701 Subsystem Vendor ID: 8086 00:15:16.701 Serial Number: SPDK00000000000001 00:15:16.701 Model Number: SPDK bdev Controller 00:15:16.701 Firmware Version: 24.09 00:15:16.701 Recommended Arb Burst: 6 00:15:16.701 IEEE OUI Identifier: e4 d2 5c 00:15:16.701 Multi-path I/O 00:15:16.701 May have multiple subsystem ports: Yes 00:15:16.701 May have multiple controllers: Yes 00:15:16.701 Associated with SR-IOV VF: No 00:15:16.701 Max Data Transfer Size: 131072 00:15:16.701 Max Number of Namespaces: 32 00:15:16.701 Max Number of I/O Queues: 127 00:15:16.701 NVMe Specification Version (VS): 1.3 00:15:16.701 NVMe Specification Version (Identify): 1.3 00:15:16.701 Maximum Queue Entries: 128 00:15:16.701 Contiguous Queues Required: Yes 00:15:16.701 
Arbitration Mechanisms Supported 00:15:16.701 Weighted Round Robin: Not Supported 00:15:16.701 Vendor Specific: Not Supported 00:15:16.701 Reset Timeout: 15000 ms 00:15:16.701 Doorbell Stride: 4 bytes 00:15:16.701 NVM Subsystem Reset: Not Supported 00:15:16.701 Command Sets Supported 00:15:16.701 NVM Command Set: Supported 00:15:16.701 Boot Partition: Not Supported 00:15:16.701 Memory Page Size Minimum: 4096 bytes 00:15:16.701 Memory Page Size Maximum: 4096 bytes 00:15:16.701 Persistent Memory Region: Not Supported 00:15:16.701 Optional Asynchronous Events Supported 00:15:16.701 Namespace Attribute Notices: Supported 00:15:16.701 Firmware Activation Notices: Not Supported 00:15:16.701 ANA Change Notices: Not Supported 00:15:16.701 PLE Aggregate Log Change Notices: Not Supported 00:15:16.701 LBA Status Info Alert Notices: Not Supported 00:15:16.701 EGE Aggregate Log Change Notices: Not Supported 00:15:16.701 Normal NVM Subsystem Shutdown event: Not Supported 00:15:16.701 Zone Descriptor Change Notices: Not Supported 00:15:16.701 Discovery Log Change Notices: Not Supported 00:15:16.701 Controller Attributes 00:15:16.701 128-bit Host Identifier: Supported 00:15:16.701 Non-Operational Permissive Mode: Not Supported 00:15:16.701 NVM Sets: Not Supported 00:15:16.701 Read Recovery Levels: Not Supported 00:15:16.701 Endurance Groups: Not Supported 00:15:16.701 Predictable Latency Mode: Not Supported 00:15:16.701 Traffic Based Keep ALive: Not Supported 00:15:16.701 Namespace Granularity: Not Supported 00:15:16.701 SQ Associations: Not Supported 00:15:16.701 UUID List: Not Supported 00:15:16.701 Multi-Domain Subsystem: Not Supported 00:15:16.701 Fixed Capacity Management: Not Supported 00:15:16.701 Variable Capacity Management: Not Supported 00:15:16.701 Delete Endurance Group: Not Supported 00:15:16.701 Delete NVM Set: Not Supported 00:15:16.701 Extended LBA Formats Supported: Not Supported 00:15:16.701 Flexible Data Placement Supported: Not Supported 00:15:16.701 00:15:16.701 Controller Memory Buffer Support 00:15:16.701 ================================ 00:15:16.701 Supported: No 00:15:16.701 00:15:16.701 Persistent Memory Region Support 00:15:16.701 ================================ 00:15:16.701 Supported: No 00:15:16.701 00:15:16.701 Admin Command Set Attributes 00:15:16.701 ============================ 00:15:16.701 Security Send/Receive: Not Supported 00:15:16.701 Format NVM: Not Supported 00:15:16.701 Firmware Activate/Download: Not Supported 00:15:16.701 Namespace Management: Not Supported 00:15:16.701 Device Self-Test: Not Supported 00:15:16.701 Directives: Not Supported 00:15:16.701 NVMe-MI: Not Supported 00:15:16.701 Virtualization Management: Not Supported 00:15:16.701 Doorbell Buffer Config: Not Supported 00:15:16.701 Get LBA Status Capability: Not Supported 00:15:16.701 Command & Feature Lockdown Capability: Not Supported 00:15:16.701 Abort Command Limit: 4 00:15:16.701 Async Event Request Limit: 4 00:15:16.701 Number of Firmware Slots: N/A 00:15:16.701 Firmware Slot 1 Read-Only: N/A 00:15:16.701 Firmware Activation Without Reset: [2024-07-15 19:05:43.717234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.701 [2024-07-15 19:05:43.717255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.701 [2024-07-15 19:05:43.717259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039500) on tqpair=0x1fd6510 00:15:16.701 
[2024-07-15 19:05:43.717276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.701 [2024-07-15 19:05:43.717299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.701 [2024-07-15 19:05:43.717303] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.701 [2024-07-15 19:05:43.717307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039800) on tqpair=0x1fd6510 00:15:16.701 [2024-07-15 19:05:43.717314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.701 [2024-07-15 19:05:43.717320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.701 [2024-07-15 19:05:43.717324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.717328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039980) on tqpair=0x1fd6510 00:15:16.702 N/A 00:15:16.702 Multiple Update Detection Support: N/A 00:15:16.702 Firmware Update Granularity: No Information Provided 00:15:16.702 Per-Namespace SMART Log: No 00:15:16.702 Asymmetric Namespace Access Log Page: Not Supported 00:15:16.702 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:16.702 Command Effects Log Page: Supported 00:15:16.702 Get Log Page Extended Data: Supported 00:15:16.702 Telemetry Log Pages: Not Supported 00:15:16.702 Persistent Event Log Pages: Not Supported 00:15:16.702 Supported Log Pages Log Page: May Support 00:15:16.702 Commands Supported & Effects Log Page: Not Supported 00:15:16.702 Feature Identifiers & Effects Log Page:May Support 00:15:16.702 NVMe-MI Commands & Effects Log Page: May Support 00:15:16.702 Data Area 4 for Telemetry Log: Not Supported 00:15:16.702 Error Log Page Entries Supported: 128 00:15:16.702 Keep Alive: Supported 00:15:16.702 Keep Alive Granularity: 10000 ms 00:15:16.702 00:15:16.702 NVM Command Set Attributes 00:15:16.702 ========================== 00:15:16.702 Submission Queue Entry Size 00:15:16.702 Max: 64 00:15:16.702 Min: 64 00:15:16.702 Completion Queue Entry Size 00:15:16.702 Max: 16 00:15:16.702 Min: 16 00:15:16.702 Number of Namespaces: 32 00:15:16.702 Compare Command: Supported 00:15:16.702 Write Uncorrectable Command: Not Supported 00:15:16.702 Dataset Management Command: Supported 00:15:16.702 Write Zeroes Command: Supported 00:15:16.702 Set Features Save Field: Not Supported 00:15:16.702 Reservations: Supported 00:15:16.702 Timestamp: Not Supported 00:15:16.702 Copy: Supported 00:15:16.702 Volatile Write Cache: Present 00:15:16.702 Atomic Write Unit (Normal): 1 00:15:16.702 Atomic Write Unit (PFail): 1 00:15:16.702 Atomic Compare & Write Unit: 1 00:15:16.702 Fused Compare & Write: Supported 00:15:16.702 Scatter-Gather List 00:15:16.702 SGL Command Set: Supported 00:15:16.702 SGL Keyed: Supported 00:15:16.702 SGL Bit Bucket Descriptor: Not Supported 00:15:16.702 SGL Metadata Pointer: Not Supported 00:15:16.702 Oversized SGL: Not Supported 00:15:16.702 SGL Metadata Address: Not Supported 00:15:16.702 SGL Offset: Supported 00:15:16.702 Transport SGL Data Block: Not Supported 00:15:16.702 Replay Protected Memory Block: Not Supported 00:15:16.702 00:15:16.702 Firmware Slot Information 00:15:16.702 ========================= 00:15:16.702 Active slot: 1 00:15:16.702 Slot 1 Firmware Revision: 24.09 00:15:16.702 00:15:16.702 00:15:16.702 Commands Supported and Effects 00:15:16.702 ============================== 00:15:16.702 Admin Commands 00:15:16.702 -------------- 00:15:16.702 Get Log Page (02h): Supported 00:15:16.702 Identify 
(06h): Supported 00:15:16.702 Abort (08h): Supported 00:15:16.702 Set Features (09h): Supported 00:15:16.702 Get Features (0Ah): Supported 00:15:16.702 Asynchronous Event Request (0Ch): Supported 00:15:16.702 Keep Alive (18h): Supported 00:15:16.702 I/O Commands 00:15:16.702 ------------ 00:15:16.702 Flush (00h): Supported LBA-Change 00:15:16.702 Write (01h): Supported LBA-Change 00:15:16.702 Read (02h): Supported 00:15:16.702 Compare (05h): Supported 00:15:16.702 Write Zeroes (08h): Supported LBA-Change 00:15:16.702 Dataset Management (09h): Supported LBA-Change 00:15:16.702 Copy (19h): Supported LBA-Change 00:15:16.702 00:15:16.702 Error Log 00:15:16.702 ========= 00:15:16.702 00:15:16.702 Arbitration 00:15:16.702 =========== 00:15:16.702 Arbitration Burst: 1 00:15:16.702 00:15:16.702 Power Management 00:15:16.702 ================ 00:15:16.702 Number of Power States: 1 00:15:16.702 Current Power State: Power State #0 00:15:16.702 Power State #0: 00:15:16.702 Max Power: 0.00 W 00:15:16.702 Non-Operational State: Operational 00:15:16.702 Entry Latency: Not Reported 00:15:16.702 Exit Latency: Not Reported 00:15:16.702 Relative Read Throughput: 0 00:15:16.702 Relative Read Latency: 0 00:15:16.702 Relative Write Throughput: 0 00:15:16.702 Relative Write Latency: 0 00:15:16.702 Idle Power: Not Reported 00:15:16.702 Active Power: Not Reported 00:15:16.702 Non-Operational Permissive Mode: Not Supported 00:15:16.702 00:15:16.702 Health Information 00:15:16.702 ================== 00:15:16.702 Critical Warnings: 00:15:16.702 Available Spare Space: OK 00:15:16.702 Temperature: OK 00:15:16.702 Device Reliability: OK 00:15:16.702 Read Only: No 00:15:16.702 Volatile Memory Backup: OK 00:15:16.702 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:16.702 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:16.702 Available Spare: 0% 00:15:16.702 Available Spare Threshold: 0% 00:15:16.702 Life Percentage Used:[2024-07-15 19:05:43.717440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.717447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fd6510) 00:15:16.702 [2024-07-15 19:05:43.717456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.702 [2024-07-15 19:05:43.717479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039980, cid 7, qid 0 00:15:16.702 [2024-07-15 19:05:43.721539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.702 [2024-07-15 19:05:43.721562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.702 [2024-07-15 19:05:43.721567] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.721572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039980) on tqpair=0x1fd6510 00:15:16.702 [2024-07-15 19:05:43.721617] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:16.702 [2024-07-15 19:05:43.721630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2038f00) on tqpair=0x1fd6510 00:15:16.702 [2024-07-15 19:05:43.721638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.702 [2024-07-15 19:05:43.721643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039080) on tqpair=0x1fd6510 00:15:16.702 
[2024-07-15 19:05:43.721648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.702 [2024-07-15 19:05:43.721653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039200) on tqpair=0x1fd6510 00:15:16.702 [2024-07-15 19:05:43.721658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.702 [2024-07-15 19:05:43.721663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.702 [2024-07-15 19:05:43.721667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.702 [2024-07-15 19:05:43.721678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.721682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.721686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.702 [2024-07-15 19:05:43.721694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.702 [2024-07-15 19:05:43.721742] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.702 [2024-07-15 19:05:43.721790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.702 [2024-07-15 19:05:43.721797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.702 [2024-07-15 19:05:43.721801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.721805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.702 [2024-07-15 19:05:43.721814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.721818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.721822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.702 [2024-07-15 19:05:43.721830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.702 [2024-07-15 19:05:43.721851] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.702 [2024-07-15 19:05:43.721914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.702 [2024-07-15 19:05:43.721921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.702 [2024-07-15 19:05:43.721925] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.721929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.702 [2024-07-15 19:05:43.721951] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:16.702 [2024-07-15 19:05:43.721957] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:16.702 [2024-07-15 19:05:43.721967] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.721972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.702 [2024-07-15 
19:05:43.721976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.702 [2024-07-15 19:05:43.721984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.702 [2024-07-15 19:05:43.722002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.702 [2024-07-15 19:05:43.722045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.702 [2024-07-15 19:05:43.722052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.702 [2024-07-15 19:05:43.722056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.722061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.702 [2024-07-15 19:05:43.722072] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.722078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.702 [2024-07-15 19:05:43.722082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.702 [2024-07-15 19:05:43.722090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.702 [2024-07-15 19:05:43.722116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.702 [2024-07-15 19:05:43.722161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.702 [2024-07-15 19:05:43.722168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.702 [2024-07-15 19:05:43.722172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722176] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.722187] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.722204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.722221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.703 [2024-07-15 19:05:43.722278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.703 [2024-07-15 19:05:43.722285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.703 [2024-07-15 19:05:43.722289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.722304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.722321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.722338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.703 [2024-07-15 19:05:43.722387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.703 [2024-07-15 19:05:43.722394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.703 [2024-07-15 19:05:43.722398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.722413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.722429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.722447] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.703 [2024-07-15 19:05:43.722496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.703 [2024-07-15 19:05:43.722503] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.703 [2024-07-15 19:05:43.722506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.722521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.722554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.722586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.703 [2024-07-15 19:05:43.722650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.703 [2024-07-15 19:05:43.722657] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.703 [2024-07-15 19:05:43.722661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.722676] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.722692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.722710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, 
qid 0 00:15:16.703 [2024-07-15 19:05:43.722752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.703 [2024-07-15 19:05:43.722758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.703 [2024-07-15 19:05:43.722762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.722777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722785] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.722793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.722812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.703 [2024-07-15 19:05:43.722859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.703 [2024-07-15 19:05:43.722866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.703 [2024-07-15 19:05:43.722869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.722901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722906] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.722917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.722934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.703 [2024-07-15 19:05:43.722977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.703 [2024-07-15 19:05:43.722985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.703 [2024-07-15 19:05:43.722988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.722992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.723003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.723020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.723038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.703 [2024-07-15 19:05:43.723083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.703 [2024-07-15 19:05:43.723090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:15:16.703 [2024-07-15 19:05:43.723094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.723113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.723130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.723147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.703 [2024-07-15 19:05:43.723197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.703 [2024-07-15 19:05:43.723204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.703 [2024-07-15 19:05:43.723207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.723222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.723239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.723257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.703 [2024-07-15 19:05:43.723305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.703 [2024-07-15 19:05:43.723312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.703 [2024-07-15 19:05:43.723316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.723331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.723347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.723365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.703 [2024-07-15 19:05:43.723422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.703 [2024-07-15 19:05:43.723429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.703 [2024-07-15 19:05:43.723433] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.723447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723452] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.723463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.723480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.703 [2024-07-15 19:05:43.723535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.703 [2024-07-15 19:05:43.723542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.703 [2024-07-15 19:05:43.723559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.703 [2024-07-15 19:05:43.723575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.703 [2024-07-15 19:05:43.723583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.703 [2024-07-15 19:05:43.723591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.703 [2024-07-15 19:05:43.723610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.703 [2024-07-15 19:05:43.723671] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 [2024-07-15 19:05:43.723678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.723682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.723686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.723696] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.723701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.723705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 [2024-07-15 19:05:43.723712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.704 [2024-07-15 19:05:43.723729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.704 [2024-07-15 19:05:43.723772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 [2024-07-15 19:05:43.723784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.723789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.723793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.723804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.723809] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.723813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 [2024-07-15 19:05:43.723837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.704 [2024-07-15 19:05:43.723856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.704 [2024-07-15 19:05:43.723899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 [2024-07-15 19:05:43.723906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.723910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.723914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.723924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.723930] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.723933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 [2024-07-15 19:05:43.723941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.704 [2024-07-15 19:05:43.723958] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.704 [2024-07-15 19:05:43.724000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 [2024-07-15 19:05:43.724007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.724011] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.724025] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 [2024-07-15 19:05:43.724042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.704 [2024-07-15 19:05:43.724058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.704 [2024-07-15 19:05:43.724106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 [2024-07-15 19:05:43.724113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.724116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.724131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 
[2024-07-15 19:05:43.724147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.704 [2024-07-15 19:05:43.724164] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.704 [2024-07-15 19:05:43.724205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 [2024-07-15 19:05:43.724212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.724215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.724230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724235] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 [2024-07-15 19:05:43.724246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.704 [2024-07-15 19:05:43.724263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.704 [2024-07-15 19:05:43.724305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 [2024-07-15 19:05:43.724312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.724316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.724330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 [2024-07-15 19:05:43.724346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.704 [2024-07-15 19:05:43.724363] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.704 [2024-07-15 19:05:43.724408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 [2024-07-15 19:05:43.724415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.724419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.724433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 [2024-07-15 19:05:43.724449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.704 [2024-07-15 19:05:43.724466] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.704 [2024-07-15 19:05:43.724554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 [2024-07-15 19:05:43.724564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.724568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.724583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 [2024-07-15 19:05:43.724600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.704 [2024-07-15 19:05:43.724621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.704 [2024-07-15 19:05:43.724665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 [2024-07-15 19:05:43.724672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.724676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.724693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 [2024-07-15 19:05:43.724710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.704 [2024-07-15 19:05:43.724728] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.704 [2024-07-15 19:05:43.724774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 [2024-07-15 19:05:43.724781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.724784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.724806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 [2024-07-15 19:05:43.724823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.704 [2024-07-15 19:05:43.724840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.704 [2024-07-15 19:05:43.724900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 
[2024-07-15 19:05:43.724907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.724912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.724927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.724935] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 [2024-07-15 19:05:43.724943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.704 [2024-07-15 19:05:43.724974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.704 [2024-07-15 19:05:43.725020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.704 [2024-07-15 19:05:43.725026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.704 [2024-07-15 19:05:43.725030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.725034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.704 [2024-07-15 19:05:43.725044] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.725049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.704 [2024-07-15 19:05:43.725052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.704 [2024-07-15 19:05:43.725059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.705 [2024-07-15 19:05:43.725075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.705 [2024-07-15 19:05:43.725116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.705 [2024-07-15 19:05:43.725123] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.705 [2024-07-15 19:05:43.725126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.725131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.705 [2024-07-15 19:05:43.725141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.725146] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.725150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.705 [2024-07-15 19:05:43.725157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.705 [2024-07-15 19:05:43.725174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.705 [2024-07-15 19:05:43.725231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.705 [2024-07-15 19:05:43.725238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.705 [2024-07-15 19:05:43.725242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:15:16.705 [2024-07-15 19:05:43.725246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.705 [2024-07-15 19:05:43.725256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.725262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.725265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.705 [2024-07-15 19:05:43.725273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.705 [2024-07-15 19:05:43.725290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.705 [2024-07-15 19:05:43.725334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.705 [2024-07-15 19:05:43.725341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.705 [2024-07-15 19:05:43.725345] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.725349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.705 [2024-07-15 19:05:43.725359] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.725364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.725368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.705 [2024-07-15 19:05:43.725376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.705 [2024-07-15 19:05:43.725394] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.705 [2024-07-15 19:05:43.725441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.705 [2024-07-15 19:05:43.725451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.705 [2024-07-15 19:05:43.725455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.725459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.705 [2024-07-15 19:05:43.725469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.725474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.725478] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.705 [2024-07-15 19:05:43.725486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.705 [2024-07-15 19:05:43.725502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.705 [2024-07-15 19:05:43.729519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.705 [2024-07-15 19:05:43.729540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.705 [2024-07-15 19:05:43.729545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.729549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.705 [2024-07-15 19:05:43.729564] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.729570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.729574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd6510) 00:15:16.705 [2024-07-15 19:05:43.729583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.705 [2024-07-15 19:05:43.729609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039380, cid 3, qid 0 00:15:16.705 [2024-07-15 19:05:43.729693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:16.705 [2024-07-15 19:05:43.729700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:16.705 [2024-07-15 19:05:43.729704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:16.705 [2024-07-15 19:05:43.729708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2039380) on tqpair=0x1fd6510 00:15:16.705 [2024-07-15 19:05:43.729717] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:15:16.705 0% 00:15:16.705 Data Units Read: 0 00:15:16.705 Data Units Written: 0 00:15:16.705 Host Read Commands: 0 00:15:16.705 Host Write Commands: 0 00:15:16.705 Controller Busy Time: 0 minutes 00:15:16.705 Power Cycles: 0 00:15:16.705 Power On Hours: 0 hours 00:15:16.705 Unsafe Shutdowns: 0 00:15:16.705 Unrecoverable Media Errors: 0 00:15:16.705 Lifetime Error Log Entries: 0 00:15:16.705 Warning Temperature Time: 0 minutes 00:15:16.705 Critical Temperature Time: 0 minutes 00:15:16.705 00:15:16.705 Number of Queues 00:15:16.705 ================ 00:15:16.705 Number of I/O Submission Queues: 127 00:15:16.705 Number of I/O Completion Queues: 127 00:15:16.705 00:15:16.705 Active Namespaces 00:15:16.705 ================= 00:15:16.705 Namespace ID:1 00:15:16.705 Error Recovery Timeout: Unlimited 00:15:16.705 Command Set Identifier: NVM (00h) 00:15:16.705 Deallocate: Supported 00:15:16.705 Deallocated/Unwritten Error: Not Supported 00:15:16.705 Deallocated Read Value: Unknown 00:15:16.705 Deallocate in Write Zeroes: Not Supported 00:15:16.705 Deallocated Guard Field: 0xFFFF 00:15:16.705 Flush: Supported 00:15:16.705 Reservation: Supported 00:15:16.705 Namespace Sharing Capabilities: Multiple Controllers 00:15:16.705 Size (in LBAs): 131072 (0GiB) 00:15:16.705 Capacity (in LBAs): 131072 (0GiB) 00:15:16.705 Utilization (in LBAs): 131072 (0GiB) 00:15:16.705 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:16.705 EUI64: ABCDEF0123456789 00:15:16.705 UUID: d90dd946-cc29-43e0-bb56-35f0771f7a50 00:15:16.705 Thin Provisioning: Not Supported 00:15:16.705 Per-NS Atomic Units: Yes 00:15:16.705 Atomic Boundary Size (Normal): 0 00:15:16.705 Atomic Boundary Size (PFail): 0 00:15:16.705 Atomic Boundary Offset: 0 00:15:16.705 Maximum Single Source Range Length: 65535 00:15:16.705 Maximum Copy Length: 65535 00:15:16.705 Maximum Source Range Count: 1 00:15:16.705 NGUID/EUI64 Never Reused: No 00:15:16.705 Namespace Write Protected: No 00:15:16.705 Number of LBA Formats: 1 00:15:16.705 Current LBA Format: LBA Format #00 00:15:16.705 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:16.705 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.705 
19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.705 rmmod nvme_tcp 00:15:16.705 rmmod nvme_fabrics 00:15:16.705 rmmod nvme_keyring 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74862 ']' 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74862 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74862 ']' 00:15:16.705 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74862 00:15:16.706 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:15:16.706 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:16.706 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74862 00:15:16.706 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:16.706 killing process with pid 74862 00:15:16.706 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:16.706 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74862' 00:15:16.706 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74862 00:15:16.706 19:05:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74862 00:15:16.964 19:05:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:16.964 19:05:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:16.964 19:05:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:16.964 19:05:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.964 19:05:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.964 19:05:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.964 19:05:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.964 19:05:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.964 19:05:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:16.964 ************************************ 00:15:16.964 END TEST 
nvmf_identify 00:15:16.964 ************************************ 00:15:16.964 00:15:16.964 real 0m2.530s 00:15:16.964 user 0m7.084s 00:15:16.964 sys 0m0.656s 00:15:16.964 19:05:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.964 19:05:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:16.964 19:05:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:16.964 19:05:44 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:16.964 19:05:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:16.964 19:05:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.964 19:05:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:17.222 ************************************ 00:15:17.222 START TEST nvmf_perf 00:15:17.222 ************************************ 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:17.222 * Looking for test storage... 00:15:17.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.222 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:17.223 
19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:17.223 Cannot find device "nvmf_tgt_br" 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.223 Cannot find device "nvmf_tgt_br2" 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:17.223 Cannot find device "nvmf_tgt_br" 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:17.223 Cannot find device "nvmf_tgt_br2" 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@159 -- # true 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:17.223 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:17.481 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:15:17.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:17.481 00:15:17.481 --- 10.0.0.2 ping statistics --- 00:15:17.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.481 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:17.481 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:17.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:15:17.481 00:15:17.481 --- 10.0.0.3 ping statistics --- 00:15:17.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.482 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:15:17.482 00:15:17.482 --- 10.0.0.1 ping statistics --- 00:15:17.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.482 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=75065 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 75065 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 75065 ']' 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.482 19:05:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:17.741 [2024-07-15 19:05:44.792652] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
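The ping checks above exercise the virtual topology that nvmf_veth_init just built: nvmf_init_if (10.0.0.1/24) stays in the root namespace, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the veth peers are joined through the nvmf_br bridge with TCP port 4420 opened in iptables. Condensed to its essentials (a subset of the commands traced above, omitting the second target interface), the layout is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT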
00:15:17.741 [2024-07-15 19:05:44.792716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.741 [2024-07-15 19:05:44.926391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.000 [2024-07-15 19:05:45.034897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.000 [2024-07-15 19:05:45.035172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.000 [2024-07-15 19:05:45.035325] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.000 [2024-07-15 19:05:45.035559] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.000 [2024-07-15 19:05:45.035735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.000 [2024-07-15 19:05:45.035855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.000 [2024-07-15 19:05:45.035980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.000 [2024-07-15 19:05:45.036549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.000 [2024-07-15 19:05:45.036560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.000 [2024-07-15 19:05:45.108809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:18.567 19:05:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.567 19:05:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:15:18.567 19:05:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:18.567 19:05:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:18.567 19:05:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:18.567 19:05:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.567 19:05:45 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:18.567 19:05:45 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:19.136 19:05:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:19.136 19:05:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:19.394 19:05:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:19.394 19:05:46 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:19.654 19:05:46 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:19.654 19:05:46 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:19.654 19:05:46 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:19.654 19:05:46 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:19.654 19:05:46 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:19.938 [2024-07-15 19:05:47.021323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
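After gen_nvme.sh and load_subsystem_config attach the local 0000:00:10.0 controller as Nvme0n1 and a 64 MiB, 512-byte-block Malloc0 bdev is created, the target is populated entirely over JSON-RPC. The bring-up traced just above and continued below condenses to this sequence (arguments exactly as they appear in the trace):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420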
00:15:19.938 19:05:47 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:20.196 19:05:47 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:20.196 19:05:47 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:20.454 19:05:47 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:20.454 19:05:47 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:20.713 19:05:47 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.713 [2024-07-15 19:05:47.950504] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.713 19:05:47 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:20.972 19:05:48 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:20.972 19:05:48 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:20.972 19:05:48 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:20.972 19:05:48 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:22.351 Initializing NVMe Controllers 00:15:22.351 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:22.351 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:22.351 Initialization complete. Launching workers. 00:15:22.351 ======================================================== 00:15:22.351 Latency(us) 00:15:22.351 Device Information : IOPS MiB/s Average min max 00:15:22.351 PCIE (0000:00:10.0) NSID 1 from core 0: 24832.93 97.00 1288.74 363.63 7846.08 00:15:22.351 ======================================================== 00:15:22.351 Total : 24832.93 97.00 1288.74 363.63 7846.08 00:15:22.351 00:15:22.351 19:05:49 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:23.288 Initializing NVMe Controllers 00:15:23.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:23.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:23.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:23.288 Initialization complete. Launching workers. 
00:15:23.288 ======================================================== 00:15:23.288 Latency(us) 00:15:23.288 Device Information : IOPS MiB/s Average min max 00:15:23.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3673.93 14.35 271.86 98.10 4310.39 00:15:23.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.49 0.48 8161.13 7863.15 12092.82 00:15:23.288 ======================================================== 00:15:23.288 Total : 3797.43 14.83 528.42 98.10 12092.82 00:15:23.288 00:15:23.548 19:05:50 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:24.924 Initializing NVMe Controllers 00:15:24.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:24.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:24.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:24.924 Initialization complete. Launching workers. 00:15:24.924 ======================================================== 00:15:24.924 Latency(us) 00:15:24.924 Device Information : IOPS MiB/s Average min max 00:15:24.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8804.00 34.39 3635.28 648.20 7746.74 00:15:24.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4000.00 15.62 8037.26 5911.25 12833.32 00:15:24.924 ======================================================== 00:15:24.924 Total : 12804.00 50.02 5010.47 648.20 12833.32 00:15:24.924 00:15:24.924 19:05:51 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:24.924 19:05:51 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:27.453 Initializing NVMe Controllers 00:15:27.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:27.453 Controller IO queue size 128, less than required. 00:15:27.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:27.453 Controller IO queue size 128, less than required. 00:15:27.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:27.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:27.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:27.453 Initialization complete. Launching workers. 
00:15:27.453 ======================================================== 00:15:27.453 Latency(us) 00:15:27.453 Device Information : IOPS MiB/s Average min max 00:15:27.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1639.44 409.86 78791.13 38631.85 123715.80 00:15:27.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 641.61 160.40 210201.89 47896.88 314729.17 00:15:27.453 ======================================================== 00:15:27.454 Total : 2281.04 570.26 115754.00 38631.85 314729.17 00:15:27.454 00:15:27.454 19:05:54 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:27.711 Initializing NVMe Controllers 00:15:27.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:27.711 Controller IO queue size 128, less than required. 00:15:27.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:27.711 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:27.711 Controller IO queue size 128, less than required. 00:15:27.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:27.711 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:27.711 WARNING: Some requested NVMe devices were skipped 00:15:27.711 No valid NVMe controllers or AIO or URING devices found 00:15:27.711 19:05:54 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:30.237 Initializing NVMe Controllers 00:15:30.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:30.237 Controller IO queue size 128, less than required. 00:15:30.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:30.237 Controller IO queue size 128, less than required. 00:15:30.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:30.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:30.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:30.237 Initialization complete. Launching workers. 
00:15:30.237 00:15:30.237 ==================== 00:15:30.237 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:30.237 TCP transport: 00:15:30.237 polls: 8573 00:15:30.237 idle_polls: 4875 00:15:30.237 sock_completions: 3698 00:15:30.237 nvme_completions: 6403 00:15:30.237 submitted_requests: 9644 00:15:30.237 queued_requests: 1 00:15:30.237 00:15:30.237 ==================== 00:15:30.237 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:30.237 TCP transport: 00:15:30.237 polls: 8429 00:15:30.237 idle_polls: 4633 00:15:30.237 sock_completions: 3796 00:15:30.237 nvme_completions: 6877 00:15:30.237 submitted_requests: 10284 00:15:30.237 queued_requests: 1 00:15:30.237 ======================================================== 00:15:30.237 Latency(us) 00:15:30.237 Device Information : IOPS MiB/s Average min max 00:15:30.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1600.49 400.12 81944.73 41792.96 134410.24 00:15:30.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1718.99 429.75 74366.08 31552.47 108519.76 00:15:30.237 ======================================================== 00:15:30.237 Total : 3319.49 829.87 78020.14 31552.47 134410.24 00:15:30.237 00:15:30.237 19:05:57 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:30.237 19:05:57 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:30.496 rmmod nvme_tcp 00:15:30.496 rmmod nvme_fabrics 00:15:30.496 rmmod nvme_keyring 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 75065 ']' 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 75065 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 75065 ']' 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 75065 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75065 00:15:30.496 killing process with pid 75065 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:30.496 19:05:57 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75065' 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 75065 00:15:30.496 19:05:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 75065 00:15:31.064 19:05:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:31.064 19:05:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:31.064 19:05:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:31.064 19:05:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.064 19:05:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:31.064 19:05:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.064 19:05:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.064 19:05:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.323 19:05:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:31.323 00:15:31.323 real 0m14.117s 00:15:31.323 user 0m51.490s 00:15:31.323 sys 0m4.168s 00:15:31.323 19:05:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:31.323 19:05:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:31.323 ************************************ 00:15:31.323 END TEST nvmf_perf 00:15:31.323 ************************************ 00:15:31.323 19:05:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:31.323 19:05:58 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:31.323 19:05:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:31.323 19:05:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.323 19:05:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:31.323 ************************************ 00:15:31.323 START TEST nvmf_fio_host 00:15:31.323 ************************************ 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:31.323 * Looking for test storage... 
00:15:31.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.323 19:05:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
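Note on the network setup that follows: nvmf_veth_init builds the virtual test network used by these host tests. Three veth pairs are created; the initiator end nvmf_init_if keeps 10.0.0.1/24 in the root namespace, the target ends nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the *_br peers are enslaved to the nvmf_br bridge. A rough sketch of how the result could be inspected once the setup below completes (illustrative commands, not part of the test scripts):

  ip -br addr show nvmf_init_if                     # expect 10.0.0.1/24 on the initiator side
  ip netns exec nvmf_tgt_ns_spdk ip -br addr show   # expect 10.0.0.2/24 and 10.0.0.3/24 on the target side
  bridge link show | grep nvmf                      # the *_br peers should all report 'master nvmf_br'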
00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:31.324 Cannot find device "nvmf_tgt_br" 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.324 Cannot find device "nvmf_tgt_br2" 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:31.324 Cannot find device "nvmf_tgt_br" 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:31.324 Cannot find device "nvmf_tgt_br2" 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:31.324 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.583 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:31.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:15:31.842 00:15:31.842 --- 10.0.0.2 ping statistics --- 00:15:31.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.842 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:31.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:31.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:15:31.842 00:15:31.842 --- 10.0.0.3 ping statistics --- 00:15:31.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.842 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:31.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:31.842 00:15:31.842 --- 10.0.0.1 ping statistics --- 00:15:31.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.842 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75466 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75466 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75466 ']' 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.842 19:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.842 [2024-07-15 19:05:58.980397] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:15:31.842 [2024-07-15 19:05:58.980483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.842 [2024-07-15 19:05:59.122274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.100 [2024-07-15 19:05:59.222924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:32.100 [2024-07-15 19:05:59.223219] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.100 [2024-07-15 19:05:59.223243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.100 [2024-07-15 19:05:59.223251] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.101 [2024-07-15 19:05:59.223258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.101 [2024-07-15 19:05:59.223470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.101 [2024-07-15 19:05:59.223609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.101 [2024-07-15 19:05:59.223670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.101 [2024-07-15 19:05:59.223967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.101 [2024-07-15 19:05:59.279565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:33.056 19:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.056 19:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:15:33.056 19:05:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:33.056 [2024-07-15 19:06:00.243492] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.056 19:06:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:33.056 19:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.056 19:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.056 19:06:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:33.317 Malloc1 00:15:33.576 19:06:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:33.852 19:06:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:33.852 19:06:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.110 [2024-07-15 19:06:01.337783] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.110 19:06:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:34.370 19:06:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:34.628 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:34.628 fio-3.35 00:15:34.628 Starting 1 thread 00:15:37.161 00:15:37.161 test: (groupid=0, jobs=1): err= 0: pid=75549: Mon Jul 15 19:06:04 2024 00:15:37.161 read: IOPS=8686, BW=33.9MiB/s (35.6MB/s)(68.1MiB/2008msec) 00:15:37.161 slat (nsec): min=1951, max=333987, avg=2562.52, stdev=3547.89 00:15:37.161 clat (usec): min=2625, max=13675, avg=7667.90, stdev=558.48 00:15:37.161 lat (usec): min=2670, max=13678, avg=7670.46, stdev=558.22 00:15:37.161 clat percentiles (usec): 00:15:37.161 | 1.00th=[ 6521], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7242], 00:15:37.161 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:15:37.161 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8291], 95.00th=[ 8586], 00:15:37.161 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11600], 99.95th=[12911], 00:15:37.161 | 99.99th=[13566] 00:15:37.161 bw ( KiB/s): min=34272, max=35208, per=100.00%, avg=34758.00, stdev=482.54, samples=4 00:15:37.161 iops : min= 8568, max= 8802, avg=8689.50, stdev=120.64, samples=4 00:15:37.161 write: IOPS=8679, BW=33.9MiB/s (35.6MB/s)(68.1MiB/2008msec); 0 zone resets 00:15:37.161 
slat (usec): min=2, max=286, avg= 2.68, stdev= 2.86 00:15:37.161 clat (usec): min=2480, max=13593, avg=7009.70, stdev=523.23 00:15:37.161 lat (usec): min=2494, max=13596, avg=7012.38, stdev=523.04 00:15:37.161 clat percentiles (usec): 00:15:37.161 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:15:37.161 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 6980], 60.00th=[ 7111], 00:15:37.161 | 70.00th=[ 7177], 80.00th=[ 7373], 90.00th=[ 7570], 95.00th=[ 7767], 00:15:37.161 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[12649], 99.95th=[12911], 00:15:37.161 | 99.99th=[13435] 00:15:37.161 bw ( KiB/s): min=34136, max=35328, per=100.00%, avg=34728.00, stdev=566.85, samples=4 00:15:37.161 iops : min= 8534, max= 8832, avg=8682.00, stdev=141.71, samples=4 00:15:37.161 lat (msec) : 4=0.08%, 10=99.72%, 20=0.20% 00:15:37.161 cpu : usr=68.36%, sys=22.92%, ctx=6, majf=0, minf=7 00:15:37.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:37.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:37.161 issued rwts: total=17442,17428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:37.161 00:15:37.161 Run status group 0 (all jobs): 00:15:37.161 READ: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.4MB), run=2008-2008msec 00:15:37.161 WRITE: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.4MB), run=2008-2008msec 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:37.161 19:06:04 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:37.161 19:06:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:37.161 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:37.161 fio-3.35 00:15:37.161 Starting 1 thread 00:15:39.727 00:15:39.727 test: (groupid=0, jobs=1): err= 0: pid=75592: Mon Jul 15 19:06:06 2024 00:15:39.727 read: IOPS=7504, BW=117MiB/s (123MB/s)(236MiB/2009msec) 00:15:39.727 slat (usec): min=2, max=117, avg= 3.86, stdev= 2.46 00:15:39.728 clat (usec): min=3181, max=19123, avg=9479.55, stdev=2757.68 00:15:39.728 lat (usec): min=3185, max=19126, avg=9483.40, stdev=2757.71 00:15:39.728 clat percentiles (usec): 00:15:39.728 | 1.00th=[ 4621], 5.00th=[ 5342], 10.00th=[ 6128], 20.00th=[ 7046], 00:15:39.728 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10028], 00:15:39.728 | 70.00th=[10683], 80.00th=[11469], 90.00th=[12911], 95.00th=[14484], 00:15:39.728 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19006], 99.95th=[19006], 00:15:39.728 | 99.99th=[19006] 00:15:39.728 bw ( KiB/s): min=54464, max=67808, per=51.64%, avg=62000.00, stdev=5555.02, samples=4 00:15:39.728 iops : min= 3404, max= 4238, avg=3875.00, stdev=347.19, samples=4 00:15:39.728 write: IOPS=4382, BW=68.5MiB/s (71.8MB/s)(126MiB/1847msec); 0 zone resets 00:15:39.728 slat (usec): min=31, max=371, avg=38.87, stdev= 9.18 00:15:39.728 clat (usec): min=6753, max=25824, avg=13205.74, stdev=2611.66 00:15:39.728 lat (usec): min=6786, max=25858, avg=13244.61, stdev=2611.77 00:15:39.728 clat percentiles (usec): 00:15:39.728 | 1.00th=[ 8356], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10945], 00:15:39.728 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12911], 60.00th=[13698], 00:15:39.728 | 70.00th=[14484], 80.00th=[15533], 90.00th=[16712], 95.00th=[17433], 00:15:39.728 | 99.00th=[19792], 99.50th=[21103], 99.90th=[24249], 99.95th=[25560], 00:15:39.728 | 99.99th=[25822] 00:15:39.728 bw ( KiB/s): min=55488, max=70656, per=91.62%, avg=64248.00, stdev=6345.39, samples=4 00:15:39.728 iops : min= 3468, max= 4416, avg=4015.50, stdev=396.59, samples=4 00:15:39.728 lat (msec) : 4=0.15%, 10=41.78%, 20=57.80%, 50=0.28% 00:15:39.728 cpu : usr=83.22%, sys=12.95%, ctx=9, majf=0, minf=12 00:15:39.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:15:39.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:39.728 issued rwts: total=15076,8095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:39.728 00:15:39.728 Run status group 0 (all jobs): 00:15:39.728 READ: bw=117MiB/s (123MB/s), 
117MiB/s-117MiB/s (123MB/s-123MB/s), io=236MiB (247MB), run=2009-2009msec 00:15:39.728 WRITE: bw=68.5MiB/s (71.8MB/s), 68.5MiB/s-68.5MiB/s (71.8MB/s-71.8MB/s), io=126MiB (133MB), run=1847-1847msec 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.728 rmmod nvme_tcp 00:15:39.728 rmmod nvme_fabrics 00:15:39.728 rmmod nvme_keyring 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75466 ']' 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75466 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75466 ']' 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75466 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:39.728 19:06:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75466 00:15:39.728 killing process with pid 75466 00:15:39.728 19:06:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:39.728 19:06:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:39.728 19:06:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75466' 00:15:39.728 19:06:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75466 00:15:39.728 19:06:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75466 00:15:39.986 19:06:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:39.986 19:06:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:39.986 19:06:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:39.986 19:06:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.986 19:06:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.987 19:06:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.987 19:06:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
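Note on the fio results above: the reported bandwidth is consistent with IOPS times block size. For the 16 KiB mock_sgl run, roughly 7504 read IOPS at 16 KiB per IO is about 117 MiB/s, which is what the READ status line reports. A quick cross-check (illustrative only):

  echo $(( 7504 * 16384 / 1048576 ))   # -> 117 (MiB/s), matching READ: bw=117MiB/s above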
00:15:39.987 19:06:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.246 19:06:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:40.246 ************************************ 00:15:40.246 END TEST nvmf_fio_host 00:15:40.246 ************************************ 00:15:40.246 00:15:40.246 real 0m8.873s 00:15:40.246 user 0m36.371s 00:15:40.246 sys 0m2.270s 00:15:40.246 19:06:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.246 19:06:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.246 19:06:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:40.246 19:06:07 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:40.246 19:06:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:40.246 19:06:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.246 19:06:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:40.246 ************************************ 00:15:40.246 START TEST nvmf_failover 00:15:40.246 ************************************ 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:40.246 * Looking for test storage... 00:15:40.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:40.246 Cannot find device "nvmf_tgt_br" 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:15:40.246 Cannot find device "nvmf_tgt_br2" 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:40.246 Cannot find device "nvmf_tgt_br" 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:40.246 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:40.506 Cannot find device "nvmf_tgt_br2" 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:40.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:40.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:40.506 19:06:07 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:40.506 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:40.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:15:40.765 00:15:40.765 --- 10.0.0.2 ping statistics --- 00:15:40.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.765 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:40.765 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:40.765 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:15:40.765 00:15:40.765 --- 10.0.0.3 ping statistics --- 00:15:40.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.765 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:40.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:40.765 00:15:40.765 --- 10.0.0.1 ping statistics --- 00:15:40.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.765 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75815 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75815 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75815 ']' 00:15:40.765 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:40.765 19:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.766 19:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:40.766 [2024-07-15 19:06:07.888712] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:15:40.766 [2024-07-15 19:06:07.889801] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.766 [2024-07-15 19:06:08.037198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:41.024 [2024-07-15 19:06:08.165424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.024 [2024-07-15 19:06:08.165737] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.024 [2024-07-15 19:06:08.165908] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.024 [2024-07-15 19:06:08.166050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.024 [2024-07-15 19:06:08.166114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
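For reference, the veth/namespace topology that nvmf_veth_init builds in the trace above condenses to the sketch below (same device names and addresses as logged; stale-device cleanup, retries and error handling are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge the three peer ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # initiator -> both target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target namespace -> initiator
  # nvmfappstart then launches the target inside that namespace (backgrounded here for the sketch):
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &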
00:15:41.024 [2024-07-15 19:06:08.166390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.024 [2024-07-15 19:06:08.166475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:41.024 [2024-07-15 19:06:08.166482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.024 [2024-07-15 19:06:08.224054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:41.960 19:06:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.960 19:06:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:41.960 19:06:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:41.960 19:06:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:41.960 19:06:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:41.960 19:06:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.960 19:06:08 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:41.960 [2024-07-15 19:06:09.187353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.960 19:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:42.218 Malloc0 00:15:42.477 19:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:42.735 19:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:42.735 19:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.992 [2024-07-15 19:06:10.221921] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.992 19:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:43.249 [2024-07-15 19:06:10.474251] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:43.249 19:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:43.506 [2024-07-15 19:06:10.706617] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:43.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
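Stripped of the xtrace noise, the target-side setup that failover.sh drives above is just the following rpc.py sequence (a condensed sketch using the arguments as logged; -u 8192 sets the in-capsule data size, and 64/512 come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport for the target
  $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Three listeners on the same address; the test later removes and re-adds
  # them one at a time to force the initiator to fail over between ports.
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done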
00:15:43.506 19:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75873 00:15:43.506 19:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:43.506 19:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:43.506 19:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75873 /var/tmp/bdevperf.sock 00:15:43.506 19:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75873 ']' 00:15:43.506 19:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:43.506 19:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.506 19:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:43.506 19:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.506 19:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:44.879 19:06:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.879 19:06:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:44.879 19:06:11 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:44.879 NVMe0n1 00:15:44.879 19:06:12 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:45.136 00:15:45.136 19:06:12 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75896 00:15:45.136 19:06:12 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:45.136 19:06:12 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:46.510 19:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.510 19:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:49.790 19:06:16 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:49.790 00:15:49.790 19:06:17 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:50.048 19:06:17 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:53.332 19:06:20 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.332 [2024-07-15 19:06:20.533389] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.332 
19:06:20 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:54.709 19:06:21 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:54.709 19:06:21 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75896 00:16:01.292 0 00:16:01.292 19:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75873 00:16:01.292 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75873 ']' 00:16:01.292 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75873 00:16:01.292 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:01.292 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.292 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75873 00:16:01.292 killing process with pid 75873 00:16:01.292 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:01.292 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:01.292 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75873' 00:16:01.292 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75873 00:16:01.292 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75873 00:16:01.292 19:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:01.292 [2024-07-15 19:06:10.783624] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:16:01.292 [2024-07-15 19:06:10.783740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75873 ] 00:16:01.292 [2024-07-15 19:06:10.924013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.292 [2024-07-15 19:06:11.076166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.292 [2024-07-15 19:06:11.151669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:01.292 Running I/O for 15 seconds... 
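The failover exercise itself is easier to follow as a linear list of the host-side steps traced above (a condensed sketch; the trap handlers, waitforlisten calls and PID bookkeeping are omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_rpc="$rpc -s /var/tmp/bdevperf.sock"

  # Start bdevperf idle (-z) with its own RPC socket, then attach the same
  # subsystem through two ports so NVMe0n1 has a primary and an alternate path.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  $bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Run 15 s of verify I/O while pulling listeners out from under the active path.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # 4420 -> 4421
  sleep 3
  $bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # 4421 -> 4422
  sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # 4422 -> back to 4420
  wait                                                                                        # let the 15 s run finish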
00:16:01.292 [2024-07-15 19:06:13.612832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.292 [2024-07-15 19:06:13.612945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.292 [2024-07-15 19:06:13.613008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.292 [2024-07-15 19:06:13.613024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.292 [2024-07-15 19:06:13.613041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.292 [2024-07-15 19:06:13.613056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.292 [2024-07-15 19:06:13.613072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.292 [2024-07-15 19:06:13.613086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.292 [2024-07-15 19:06:13.613101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.292 [2024-07-15 19:06:13.613115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.292 [2024-07-15 19:06:13.613130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.292 [2024-07-15 19:06:13.613145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.292 [2024-07-15 19:06:13.613160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.292 [2024-07-15 19:06:13.613174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.292 [2024-07-15 19:06:13.613189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613291] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.613614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613629] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.613642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.613670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.613698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.613736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.613766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.613811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.613840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.613869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.613898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.613926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.613955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.613984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.613999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.614012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.614042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.614071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.293 [2024-07-15 19:06:13.614100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.293 [2024-07-15 19:06:13.614533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.293 [2024-07-15 19:06:13.614551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.614564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 
19:06:13.614593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.614628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.614667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.614696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.614725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.614754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.614783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.614812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.614841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.614870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.614898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.614958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.614974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.614987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.615029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.615057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.615086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.615115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.615144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.615174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.615202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.615231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.294 [2024-07-15 19:06:13.615259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.294 [2024-07-15 19:06:13.615828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.294 [2024-07-15 19:06:13.615843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.615856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 
[2024-07-15 19:06:13.615872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.615885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.615900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.615913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.615928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.615941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.615956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.615969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.615990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.295 [2024-07-15 19:06:13.616032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.295 [2024-07-15 19:06:13.616060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.295 [2024-07-15 19:06:13.616089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.295 [2024-07-15 19:06:13.616125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.295 [2024-07-15 19:06:13.616153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.295 [2024-07-15 19:06:13.616182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.295 [2024-07-15 19:06:13.616210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.295 [2024-07-15 19:06:13.616238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:24 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.295 [2024-07-15 19:06:13.616721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8a2b0 is same with the state(5) to be set 00:16:01.295 [2024-07-15 19:06:13.616755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.295 [2024-07-15 19:06:13.616766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.295 [2024-07-15 19:06:13.616777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:16:01.295 [2024-07-15 19:06:13.616790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:16:01.295 [2024-07-15 19:06:13.616815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.295 [2024-07-15 19:06:13.616825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79424 len:8 PRP1 0x0 PRP2 0x0 00:16:01.295 [2024-07-15 19:06:13.616838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.295 [2024-07-15 19:06:13.616861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.295 [2024-07-15 19:06:13.616883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79432 len:8 PRP1 0x0 PRP2 0x0 00:16:01.295 [2024-07-15 19:06:13.616903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.295 [2024-07-15 19:06:13.616928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.295 [2024-07-15 19:06:13.616939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79440 len:8 PRP1 0x0 PRP2 0x0 00:16:01.295 [2024-07-15 19:06:13.616951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.616965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.295 [2024-07-15 19:06:13.616974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.295 [2024-07-15 19:06:13.616984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79448 len:8 PRP1 0x0 PRP2 0x0 00:16:01.295 [2024-07-15 19:06:13.617016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.617029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.295 [2024-07-15 19:06:13.617039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.295 [2024-07-15 19:06:13.617050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79456 len:8 PRP1 0x0 PRP2 0x0 00:16:01.295 [2024-07-15 19:06:13.617062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.617076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.295 [2024-07-15 19:06:13.617085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.295 [2024-07-15 19:06:13.617096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79464 len:8 PRP1 0x0 PRP2 0x0 00:16:01.295 [2024-07-15 19:06:13.617108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.295 [2024-07-15 19:06:13.617121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.295 [2024-07-15 
19:06:13.617131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.296 [2024-07-15 19:06:13.617141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79472 len:8 PRP1 0x0 PRP2 0x0 00:16:01.296 [2024-07-15 19:06:13.617154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:13.617167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.296 [2024-07-15 19:06:13.617177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.296 [2024-07-15 19:06:13.617187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79480 len:8 PRP1 0x0 PRP2 0x0 00:16:01.296 [2024-07-15 19:06:13.617199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:13.617277] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa8a2b0 was disconnected and freed. reset controller. 00:16:01.296 [2024-07-15 19:06:13.617296] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:01.296 [2024-07-15 19:06:13.617374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.296 [2024-07-15 19:06:13.617395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:13.617411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.296 [2024-07-15 19:06:13.617434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:13.617449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.296 [2024-07-15 19:06:13.617462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:13.617476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.296 [2024-07-15 19:06:13.617489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:13.617503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:01.296 [2024-07-15 19:06:13.621461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:01.296 [2024-07-15 19:06:13.621517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa29710 (9): Bad file descriptor 00:16:01.296 [2024-07-15 19:06:13.656859] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:01.296 [2024-07-15 19:06:17.263049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263480] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.296 [2024-07-15 19:06:17.263579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.296 [2024-07-15 19:06:17.263608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.296 [2024-07-15 19:06:17.263636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.296 [2024-07-15 19:06:17.263664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.296 [2024-07-15 19:06:17.263692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.296 [2024-07-15 19:06:17.263723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.296 [2024-07-15 19:06:17.263751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.296 [2024-07-15 19:06:17.263779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263794] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.296 [2024-07-15 19:06:17.263957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.296 [2024-07-15 19:06:17.263970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.263984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.263996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88408 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.297 [2024-07-15 19:06:17.264332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.297 [2024-07-15 19:06:17.264358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:01.297 [2024-07-15 19:06:17.264386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.297 [2024-07-15 19:06:17.264412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.297 [2024-07-15 19:06:17.264439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.297 [2024-07-15 19:06:17.264468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.297 [2024-07-15 19:06:17.264495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.297 [2024-07-15 19:06:17.264564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264719] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.264982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.264997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.265015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.265030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.265043] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.265057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.265070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.265084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.265097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.265111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.297 [2024-07-15 19:06:17.265124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.265138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.297 [2024-07-15 19:06:17.265151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.265184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.297 [2024-07-15 19:06:17.265197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.297 [2024-07-15 19:06:17.265212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.265624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.265651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:01.298 [2024-07-15 19:06:17.265666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.265679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.265707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.265733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.265767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.265795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.265822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.265849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.265877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.265904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.265948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265964] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.265977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.265991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.266005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.266039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.298 [2024-07-15 19:06:17.266066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266270] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.298 [2024-07-15 19:06:17.266485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.298 [2024-07-15 19:06:17.266500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.299 [2024-07-15 19:06:17.266518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.266542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8bd80 is same with the state(5) to be set 00:16:01.299 [2024-07-15 19:06:17.266561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.266571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.266581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88160 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.266594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.266608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.266617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.266627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88736 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.266639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.266652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.266661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.266670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88744 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.266683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.266696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.266705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.266714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88752 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.266726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.266739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.266748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.266758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88760 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.266769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.266782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.266791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.266801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88768 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.266824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.266838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.266852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.266863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:88776 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.266875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.266888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.266904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.266915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88784 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.266944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.266957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.266967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.266976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88792 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.266989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.267011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.267021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88800 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.267034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.267056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.267066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88168 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.267078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.267100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.267110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88176 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.267122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.267144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.267154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88184 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 
[2024-07-15 19:06:17.267166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.267188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.267198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88192 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.267216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.267244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.267270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88200 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.267282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.267311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.267320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88208 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.267333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.267355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.267365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88216 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.267377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.299 [2024-07-15 19:06:17.267398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.299 [2024-07-15 19:06:17.267408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88224 len:8 PRP1 0x0 PRP2 0x0 00:16:01.299 [2024-07-15 19:06:17.267420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267487] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa8bd80 was disconnected and freed. reset controller. 
00:16:01.299 [2024-07-15 19:06:17.267505] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:01.299 [2024-07-15 19:06:17.267575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.299 [2024-07-15 19:06:17.267596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.299 [2024-07-15 19:06:17.267624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.299 [2024-07-15 19:06:17.267649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.299 [2024-07-15 19:06:17.267674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:17.267686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:01.299 [2024-07-15 19:06:17.267726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa29710 (9): Bad file descriptor 00:16:01.299 [2024-07-15 19:06:17.271498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:01.299 [2024-07-15 19:06:17.308414] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:01.299 [2024-07-15 19:06:21.815332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.299 [2024-07-15 19:06:21.815407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:21.815463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.299 [2024-07-15 19:06:21.815482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:21.815512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.299 [2024-07-15 19:06:21.815530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:21.815546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.299 [2024-07-15 19:06:21.815560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.299 [2024-07-15 19:06:21.815574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.815588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.815618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.815647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.815677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.815707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.815736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815751] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.815765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.815794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.815822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.815850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.815889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.815918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.815956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.815974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.815988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816060] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40432 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.300 [2024-07-15 19:06:21.816746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 
[2024-07-15 19:06:21.816777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.300 [2024-07-15 19:06:21.816976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.300 [2024-07-15 19:06:21.816990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.301 [2024-07-15 19:06:21.817767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.301 [2024-07-15 19:06:21.817797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.301 [2024-07-15 19:06:21.817829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.301 [2024-07-15 19:06:21.817859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.301 [2024-07-15 19:06:21.817889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.301 [2024-07-15 19:06:21.817920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.301 [2024-07-15 19:06:21.817960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.817976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.301 [2024-07-15 19:06:21.817990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.818006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.301 [2024-07-15 19:06:21.818021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.818037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.301 [2024-07-15 19:06:21.818051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 
19:06:21.818068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.301 [2024-07-15 19:06:21.818083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.818099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.301 [2024-07-15 19:06:21.818113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.301 [2024-07-15 19:06:21.818129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.302 [2024-07-15 19:06:21.818298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.302 [2024-07-15 19:06:21.818336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.302 [2024-07-15 19:06:21.818368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.302 [2024-07-15 19:06:21.818399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.302 [2024-07-15 19:06:21.818429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.302 [2024-07-15 19:06:21.818459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.302 [2024-07-15 19:06:21.818490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:01.302 [2024-07-15 19:06:21.818541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:88 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.818975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.818991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:01.302 [2024-07-15 19:06:21.819006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.819025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7c10 is same 
with the state(5) to be set 00:16:01.302 [2024-07-15 19:06:21.819044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.302 [2024-07-15 19:06:21.819055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.302 [2024-07-15 19:06:21.819066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40760 len:8 PRP1 0x0 PRP2 0x0 00:16:01.302 [2024-07-15 19:06:21.819079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.819094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.302 [2024-07-15 19:06:21.819104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.302 [2024-07-15 19:06:21.819115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41216 len:8 PRP1 0x0 PRP2 0x0 00:16:01.302 [2024-07-15 19:06:21.819130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.819153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.302 [2024-07-15 19:06:21.819164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.302 [2024-07-15 19:06:21.819177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41224 len:8 PRP1 0x0 PRP2 0x0 00:16:01.302 [2024-07-15 19:06:21.819201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.819215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.302 [2024-07-15 19:06:21.819226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.302 [2024-07-15 19:06:21.819237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41232 len:8 PRP1 0x0 PRP2 0x0 00:16:01.302 [2024-07-15 19:06:21.819250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.819264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.302 [2024-07-15 19:06:21.819275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.302 [2024-07-15 19:06:21.819286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41240 len:8 PRP1 0x0 PRP2 0x0 00:16:01.302 [2024-07-15 19:06:21.819300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.819314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.302 [2024-07-15 19:06:21.819325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.302 [2024-07-15 19:06:21.819336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41248 len:8 PRP1 0x0 PRP2 0x0 00:16:01.302 [2024-07-15 19:06:21.819350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 
19:06:21.819364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.302 [2024-07-15 19:06:21.819375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.302 [2024-07-15 19:06:21.819386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41256 len:8 PRP1 0x0 PRP2 0x0 00:16:01.302 [2024-07-15 19:06:21.819399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.819413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.302 [2024-07-15 19:06:21.819423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.302 [2024-07-15 19:06:21.819435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41264 len:8 PRP1 0x0 PRP2 0x0 00:16:01.302 [2024-07-15 19:06:21.819453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.302 [2024-07-15 19:06:21.819467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.302 [2024-07-15 19:06:21.819478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.302 [2024-07-15 19:06:21.819488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41272 len:8 PRP1 0x0 PRP2 0x0 00:16:01.303 [2024-07-15 19:06:21.819514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.819530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.303 [2024-07-15 19:06:21.819541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.303 [2024-07-15 19:06:21.819552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41280 len:8 PRP1 0x0 PRP2 0x0 00:16:01.303 [2024-07-15 19:06:21.819574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.819589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.303 [2024-07-15 19:06:21.819599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.303 [2024-07-15 19:06:21.819610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41288 len:8 PRP1 0x0 PRP2 0x0 00:16:01.303 [2024-07-15 19:06:21.819624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.819637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.303 [2024-07-15 19:06:21.819648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.303 [2024-07-15 19:06:21.819658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41296 len:8 PRP1 0x0 PRP2 0x0 00:16:01.303 [2024-07-15 19:06:21.819672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.819686] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.303 [2024-07-15 19:06:21.819696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.303 [2024-07-15 19:06:21.819707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41304 len:8 PRP1 0x0 PRP2 0x0 00:16:01.303 [2024-07-15 19:06:21.819721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.819735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.303 [2024-07-15 19:06:21.819746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.303 [2024-07-15 19:06:21.819757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41312 len:8 PRP1 0x0 PRP2 0x0 00:16:01.303 [2024-07-15 19:06:21.819771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.819785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.303 [2024-07-15 19:06:21.819795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.303 [2024-07-15 19:06:21.819807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41320 len:8 PRP1 0x0 PRP2 0x0 00:16:01.303 [2024-07-15 19:06:21.819820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.819835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.303 [2024-07-15 19:06:21.819845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.303 [2024-07-15 19:06:21.819856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41328 len:8 PRP1 0x0 PRP2 0x0 00:16:01.303 [2024-07-15 19:06:21.819875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.819890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:01.303 [2024-07-15 19:06:21.819900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.303 [2024-07-15 19:06:21.819912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41336 len:8 PRP1 0x0 PRP2 0x0 00:16:01.303 [2024-07-15 19:06:21.819932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.819990] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaa7c10 was disconnected and freed. reset controller. 
00:16:01.303 [2024-07-15 19:06:21.820010] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:01.303 [2024-07-15 19:06:21.820077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.303 [2024-07-15 19:06:21.820101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.820117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.303 [2024-07-15 19:06:21.820131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.820146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.303 [2024-07-15 19:06:21.820160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.820174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.303 [2024-07-15 19:06:21.820188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.303 [2024-07-15 19:06:21.820202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:01.303 [2024-07-15 19:06:21.824612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:01.303 [2024-07-15 19:06:21.824675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa29710 (9): Bad file descriptor 00:16:01.303 [2024-07-15 19:06:21.861186] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
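The block above is one complete failover cycle as bdev_nvme logs it: the active qpair is torn down (ABORTED - SQ DELETION on every queued I/O), bdev_nvme_failover_trid moves the controller to the next registered path (here 10.0.0.2:4422 to 10.0.0.2:4420), and the cycle ends with "Resetting controller successful." As a hedged aside, not part of the captured output: the alternate paths that make this possible are registered by attaching the same bdev_nvme controller once per TCP portal, the same pattern the second phase of this test records further down (failover.sh lines 78-80). The loop below only condenses those calls; socket path, address, ports and NQN are taken from them.

    # Sketch only: register NVMe0 with three TCP paths so bdev_nvme can fail over
    # between them when the active path disappears. Values match this test's RPCs.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    for port in 4420 4421 4422; do
        "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done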
00:16:01.303
00:16:01.303 Latency(us)
00:16:01.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:01.303 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:01.303 Verification LBA range: start 0x0 length 0x4000
00:16:01.303 NVMe0n1 : 15.01 9156.02 35.77 219.42 0.00 13620.32 640.47 14715.81
00:16:01.303 ===================================================================================================================
00:16:01.303 Total : 9156.02 35.77 219.42 0.00 13620.32 640.47 14715.81
00:16:01.303 Received shutdown signal, test time was about 15.000000 seconds
00:16:01.303
00:16:01.303 Latency(us)
00:16:01.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:01.303 ===================================================================================================================
00:16:01.303 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:01.303 19:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:16:01.303 19:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:16:01.303 19:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:16:01.303 19:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76069
00:16:01.303 19:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:16:01.303 19:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76069 /var/tmp/bdevperf.sock
00:16:01.303 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76069 ']'
00:16:01.303 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:01.303 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:01.303 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:16:01.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
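The two tables above close the first bdevperf run (the all-zero table is the summary bdevperf prints after the shutdown signal), and failover.sh then applies its pass criterion: exactly three "Resetting controller successful" lines, one per induced failover, before relaunching bdevperf idle (-z) behind /var/tmp/bdevperf.sock for the second phase. A minimal sketch of that check, assuming the run's output landed in a file like the try.txt this log later cats:

    # Sketch only: assert one successful reset per induced failover, then start
    # bdevperf in RPC-server mode so the next phase can drive it over the socket.
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || { echo "expected 3 resets, saw $count" >&2; exit 1; }
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!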
00:16:01.303 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.303 19:06:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:01.303 19:06:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.303 19:06:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:01.303 19:06:28 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:01.303 [2024-07-15 19:06:28.427893] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:01.303 19:06:28 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:01.562 [2024-07-15 19:06:28.700164] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:01.562 19:06:28 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:01.821 NVMe0n1 00:16:01.821 19:06:29 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:02.080 00:16:02.080 19:06:29 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:02.648 00:16:02.648 19:06:29 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:02.648 19:06:29 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:02.648 19:06:29 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:03.213 19:06:30 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:06.492 19:06:33 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:06.492 19:06:33 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:06.492 19:06:33 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76143 00:16:06.492 19:06:33 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:06.492 19:06:33 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76143 00:16:07.447 0 00:16:07.447 19:06:34 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:07.447 [2024-07-15 19:06:27.826685] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
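The xtrace records just above (failover.sh lines 76-92) are the second phase in full: two more listeners are added on the target (ports 4421 and 4422), NVMe0 is attached through the bdevperf RPC socket on all three portals, its presence is confirmed with bdev_nvme_get_controllers, the active 4420 path is detached, and after a pause bdevperf.py perform_tests drives I/O over whichever path the controller failed over to. Condensed into a hedged sketch; every command is one the log itself records, only the ordering comments are added:

    # Sketch only: detach the active path, let bdev_nvme fail over, verify the
    # controller is still registered, then run the queued bdevperf job via RPC.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

The try.txt dump around this point (cat at failover.sh line 94) is that second run's own log, ending in the one-second verify results below.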
00:16:07.447 [2024-07-15 19:06:27.826891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76069 ] 00:16:07.447 [2024-07-15 19:06:27.959419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.447 [2024-07-15 19:06:28.059451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.447 [2024-07-15 19:06:28.116098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:07.447 [2024-07-15 19:06:30.176912] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:07.447 [2024-07-15 19:06:30.177044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.447 [2024-07-15 19:06:30.177071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.447 [2024-07-15 19:06:30.177090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.447 [2024-07-15 19:06:30.177104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.447 [2024-07-15 19:06:30.177118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.447 [2024-07-15 19:06:30.177132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.447 [2024-07-15 19:06:30.177146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.447 [2024-07-15 19:06:30.177160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.447 [2024-07-15 19:06:30.177182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:07.447 [2024-07-15 19:06:30.177236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:07.447 [2024-07-15 19:06:30.177269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94710 (9): Bad file descriptor 00:16:07.447 [2024-07-15 19:06:30.185054] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:07.447 Running I/O for 1 seconds... 
00:16:07.447 00:16:07.447 Latency(us) 00:16:07.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.447 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:07.447 Verification LBA range: start 0x0 length 0x4000 00:16:07.447 NVMe0n1 : 1.01 6980.22 27.27 0.00 0.00 18262.17 2234.18 17277.67 00:16:07.447 =================================================================================================================== 00:16:07.447 Total : 6980.22 27.27 0.00 0.00 18262.17 2234.18 17277.67 00:16:07.447 19:06:34 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:07.447 19:06:34 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:07.705 19:06:34 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:07.963 19:06:35 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:07.963 19:06:35 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:08.221 19:06:35 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:08.479 19:06:35 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76069 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76069 ']' 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76069 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76069 00:16:11.781 killing process with pid 76069 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76069' 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76069 00:16:11.781 19:06:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76069 00:16:12.040 19:06:39 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:12.040 19:06:39 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:12.298 19:06:39 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.298 rmmod nvme_tcp 00:16:12.298 rmmod nvme_fabrics 00:16:12.298 rmmod nvme_keyring 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75815 ']' 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75815 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75815 ']' 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75815 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75815 00:16:12.298 killing process with pid 75815 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75815' 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75815 00:16:12.298 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75815 00:16:12.557 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.557 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:12.557 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:12.557 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.557 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.557 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.557 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.557 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.557 19:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:12.817 00:16:12.817 real 0m32.492s 00:16:12.817 user 2m5.304s 00:16:12.817 sys 0m5.878s 00:16:12.817 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:12.817 19:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:12.817 ************************************ 00:16:12.817 END TEST nvmf_failover 00:16:12.817 ************************************ 00:16:12.817 19:06:39 nvmf_tcp -- 
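The nvmftestfini teardown traced here boils down to unloading the kernel initiator modules, killing the target process, and flushing the test network; roughly (the body of the _remove_spdk_ns helper is not shown in the trace, so the namespace-removal step is an assumption):

    modprobe -v -r nvme-tcp     # the rmmod lines above are this command's verbose output
    modprobe -v -r nvme-fabrics
    kill 75815                  # nvmfpid of the target started for this test
    # _remove_spdk_ns presumably tears down nvmf_tgt_ns_spdk (assumption; helper body not traced)
    ip -4 addr flush nvmf_init_if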
common/autotest_common.sh@1142 -- # return 0 00:16:12.817 19:06:39 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:12.817 19:06:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:12.817 19:06:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.817 19:06:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.817 ************************************ 00:16:12.817 START TEST nvmf_host_discovery 00:16:12.817 ************************************ 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:12.817 * Looking for test storage... 00:16:12.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.817 19:06:39 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:12.817 Cannot find device "nvmf_tgt_br" 00:16:12.817 
19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:16:12.817 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.818 Cannot find device "nvmf_tgt_br2" 00:16:12.818 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:16:12.818 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:12.818 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:12.818 Cannot find device "nvmf_tgt_br" 00:16:12.818 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:16:12.818 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:12.818 Cannot find device "nvmf_tgt_br2" 00:16:12.818 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:16:12.818 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.076 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:13.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:16:13.336 00:16:13.336 --- 10.0.0.2 ping statistics --- 00:16:13.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.336 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:13.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:16:13.336 00:16:13.336 --- 10.0.0.3 ping statistics --- 00:16:13.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.336 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
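The topology nvmf_veth_init assembles above amounts to one initiator-side veth, two target-side veths moved into the nvmf_tgt_ns_spdk namespace, and a bridge tying their peer ends together; condensed from the traced commands (the matching ip link set ... up calls are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side,    10.0.0.2/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target side,    10.0.0.3/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                       # bridge the three *_br peers
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator-to-target reachability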
00:16:13.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:13.336 00:16:13.336 --- 10.0.0.1 ping statistics --- 00:16:13.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.336 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76408 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76408 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76408 ']' 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.336 19:06:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.336 [2024-07-15 19:06:40.485024] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:16:13.336 [2024-07-15 19:06:40.485153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.595 [2024-07-15 19:06:40.625720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.595 [2024-07-15 19:06:40.745300] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
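The target for the discovery test is then started inside that namespace and configured over its default RPC socket; the entries around this point amount to the following sequence (condensed from the trace; nvmfpid 76408 is the process this run produced):

    # nvmfappstart -m 0x2: run the SPDK target on core 1 inside the target namespace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                  # discovery service on port 8009
    $RPC bdev_null_create null0 1000 512            # two null bdevs, used as namespaces later
    $RPC bdev_null_create null1 1000 512            # (arguments as in the trace)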
00:16:13.595 [2024-07-15 19:06:40.745386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.595 [2024-07-15 19:06:40.745414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.595 [2024-07-15 19:06:40.745433] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.595 [2024-07-15 19:06:40.745442] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.595 [2024-07-15 19:06:40.745472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.595 [2024-07-15 19:06:40.803911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.531 [2024-07-15 19:06:41.536792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.531 [2024-07-15 19:06:41.544868] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.531 null0 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.531 null1 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76439 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76439 /tmp/host.sock 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76439 ']' 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.531 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.531 19:06:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.531 [2024-07-15 19:06:41.626080] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:16:14.531 [2024-07-15 19:06:41.626152] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76439 ] 00:16:14.531 [2024-07-15 19:06:41.762049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.790 [2024-07-15 19:06:41.889783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.790 [2024-07-15 19:06:41.947580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:15.357 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.357 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:15.357 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:15.357 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:15.357 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.357 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.616 19:06:42 
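On the host side the test starts a second SPDK app on its own RPC socket and points the bdev_nvme discovery machinery at the 8009 listener; condensed from the surrounding entries (rpc_cmd in the trace wraps the same rpc.py call; hostpid 76439 in this run):

    # discovery.sh@44: the "host" app, core 0, RPC over /tmp/host.sock
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
    $RPC log_set_flag bdev_nvme
    # Attach to the discovery service; controllers and bdevs get created as the
    # target's subsystems and namespaces appear.
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test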
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:15.616 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:15.876 19:06:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 [2024-07-15 19:06:43.033285] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:15.876 
19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.876 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:16:16.139 19:06:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:16:16.399 [2024-07-15 19:06:43.660770] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:16.399 [2024-07-15 19:06:43.660812] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:16.399 [2024-07-15 19:06:43.660833] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:16.399 [2024-07-15 19:06:43.666817] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:16.659 [2024-07-15 19:06:43.724199] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:16:16.659 [2024-07-15 19:06:43.724250] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.226 19:06:44 
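The discovery attach above is driven entirely by what gets provisioned on the target: the test creates a data subsystem, gives it a namespace, exposes it on 4420, and whitelists the host NQN, and the host's discovery controller reacts to each step. The target-side RPCs involved, gathered from the surrounding entries:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    # later steps repeat the pattern: add_ns null1, then a second listener on 4421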
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:17.226 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.227 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.485 
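All of these checks go through the waitforcondition helper from autotest_common.sh, whose shape can be read off the trace: stash the condition string, then retry an eval of it up to ten times before giving up. A rough reconstruction (the retry pacing is not visible in the trace, so the sleep is an assumption):

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]' held
            fi
            sleep 1        # assumed pause between retries; not shown in the trace
        done
        return 1
    }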
19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.485 [2024-07-15 19:06:44.607964] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:17.485 [2024-07-15 19:06:44.608634] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:17.485 [2024-07-15 19:06:44.608677] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:17.485 [2024-07-15 19:06:44.614603] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.485 
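Once the 4421 listener is added, the discovery controller receives an AER, refetches the discovery log page, and adds the second path to the existing nvme0 controller; the checks that follow confirm it by listing the trsvcid of every attached path, for example:

    # Expect both portals to be attached to the discovered controller:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # prints "4420 4421" once the new path is up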
19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.485 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.486 [2024-07-15 19:06:44.675374] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:17.486 [2024-07-15 19:06:44.675407] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:17.486 [2024-07-15 19:06:44.675415] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.486 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:17.745 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:17.745 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.745 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.745 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:17.745 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.745 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:17.745 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:17.745 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.746 [2024-07-15 19:06:44.820822] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:17.746 [2024-07-15 19:06:44.820862] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:17.746 [2024-07-15 19:06:44.821258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.746 [2024-07-15 19:06:44.821296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.746 [2024-07-15 19:06:44.821310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.746 [2024-07-15 19:06:44.821320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.746 [2024-07-15 19:06:44.821331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.746 [2024-07-15 19:06:44.821341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.746 [2024-07-15 19:06:44.821351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.746 [2024-07-15 19:06:44.821361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.746 [2024-07-15 19:06:44.821370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75fa0 is same with the state(5) to be set 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:17.746 [2024-07-15 19:06:44.826813] bdev_nvme.c:6770:discovery_remove_controllers: 
*INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:17.746 [2024-07-15 19:06:44.826850] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:17.746 [2024-07-15 19:06:44.826914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75fa0 (9): Bad file descriptor 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 
max=10 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:17.746 19:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.746 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.009 19:06:45 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:18.009 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:18.010 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:18.010 19:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:18.010 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.010 19:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.383 [2024-07-15 19:06:46.241603] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:19.383 [2024-07-15 19:06:46.241651] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:19.383 [2024-07-15 19:06:46.241672] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:19.383 [2024-07-15 19:06:46.247640] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:19.383 [2024-07-15 19:06:46.308356] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:19.383 [2024-07-15 19:06:46.308430] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.383 19:06:46 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.383 request: 00:16:19.383 { 00:16:19.383 "name": "nvme", 00:16:19.383 "trtype": "tcp", 00:16:19.383 "traddr": "10.0.0.2", 00:16:19.383 "adrfam": "ipv4", 00:16:19.383 "trsvcid": "8009", 00:16:19.383 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:19.383 "wait_for_attach": true, 00:16:19.383 "method": "bdev_nvme_start_discovery", 00:16:19.383 "req_id": 1 00:16:19.383 } 00:16:19.383 Got JSON-RPC error response 00:16:19.383 response: 00:16:19.383 { 00:16:19.383 "code": -17, 00:16:19.383 "message": "File exists" 00:16:19.383 } 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.383 request: 00:16:19.383 { 00:16:19.383 "name": "nvme_second", 00:16:19.383 "trtype": "tcp", 00:16:19.383 "traddr": "10.0.0.2", 00:16:19.383 "adrfam": "ipv4", 00:16:19.383 "trsvcid": "8009", 00:16:19.383 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:19.383 "wait_for_attach": true, 00:16:19.383 "method": "bdev_nvme_start_discovery", 00:16:19.383 "req_id": 1 00:16:19.383 } 00:16:19.383 Got JSON-RPC error response 00:16:19.383 response: 00:16:19.383 { 00:16:19.383 "code": -17, 00:16:19.383 "message": "File exists" 00:16:19.383 } 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:19.383 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:19.384 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:19.384 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:19.384 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.384 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:19.384 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.384 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:19.384 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.384 19:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.756 [2024-07-15 19:06:47.609118] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:20.756 [2024-07-15 19:06:47.609172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0ee30 with addr=10.0.0.2, port=8010 00:16:20.756 [2024-07-15 19:06:47.609200] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:20.756 [2024-07-15 19:06:47.609212] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:20.756 [2024-07-15 19:06:47.609222] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:21.320 [2024-07-15 19:06:48.609165] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:21.320 [2024-07-15 19:06:48.609219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0ee30 with addr=10.0.0.2, port=8010 00:16:21.320 [2024-07-15 19:06:48.609245] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:21.320 [2024-07-15 19:06:48.609257] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:21.320 [2024-07-15 19:06:48.609267] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:22.691 [2024-07-15 19:06:49.608991] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:22.691 request: 00:16:22.691 { 00:16:22.691 "name": "nvme_second", 00:16:22.691 "trtype": "tcp", 00:16:22.691 "traddr": "10.0.0.2", 00:16:22.691 "adrfam": "ipv4", 00:16:22.691 "trsvcid": "8010", 00:16:22.691 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:22.691 "wait_for_attach": false, 00:16:22.691 "attach_timeout_ms": 3000, 00:16:22.691 "method": "bdev_nvme_start_discovery", 00:16:22.691 "req_id": 1 
00:16:22.691 } 00:16:22.691 Got JSON-RPC error response 00:16:22.691 response: 00:16:22.691 { 00:16:22.691 "code": -110, 00:16:22.691 "message": "Connection timed out" 00:16:22.691 } 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76439 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.691 rmmod nvme_tcp 00:16:22.691 rmmod nvme_fabrics 00:16:22.691 rmmod nvme_keyring 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76408 ']' 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76408 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76408 ']' 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76408 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76408 00:16:22.691 
19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:22.691 killing process with pid 76408 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76408' 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76408 00:16:22.691 19:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76408 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:22.950 ************************************ 00:16:22.950 END TEST nvmf_host_discovery 00:16:22.950 ************************************ 00:16:22.950 00:16:22.950 real 0m10.154s 00:16:22.950 user 0m19.445s 00:16:22.950 sys 0m2.063s 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.950 19:06:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:22.950 19:06:50 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:22.950 19:06:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:22.950 19:06:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.950 19:06:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:22.950 ************************************ 00:16:22.950 START TEST nvmf_host_multipath_status 00:16:22.950 ************************************ 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:22.950 * Looking for test storage... 
00:16:22.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:22.950 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:22.951 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:23.208 Cannot find device "nvmf_tgt_br" 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:16:23.208 Cannot find device "nvmf_tgt_br2" 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:23.208 Cannot find device "nvmf_tgt_br" 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:23.208 Cannot find device "nvmf_tgt_br2" 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:23.208 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.209 19:06:50 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:23.209 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.466 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.466 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.466 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.466 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:23.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:23.467 00:16:23.467 --- 10.0.0.2 ping statistics --- 00:16:23.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.467 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:23.467 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.467 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:16:23.467 00:16:23.467 --- 10.0.0.3 ping statistics --- 00:16:23.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.467 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:16:23.467 00:16:23.467 --- 10.0.0.1 ping statistics --- 00:16:23.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.467 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76893 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76893 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76893 ']' 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.467 19:06:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:23.467 [2024-07-15 19:06:50.643467] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
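A condensed view of the veth/namespace plumbing traced in the nvmf/common.sh lines above, restating only commands already recorded in this log (the interface names and 10.0.0.0/24 addresses are the ones this test run uses; the individual `ip link set ... up` steps from the trace are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # host-side path
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # first target path
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target path
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                         # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                # target namespace -> host

The ping replies logged below confirm both target addresses are reachable from the host side and vice versa before nvmf_tgt is started inside the namespace.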
00:16:23.467 [2024-07-15 19:06:50.643581] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.724 [2024-07-15 19:06:50.788344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:23.724 [2024-07-15 19:06:50.919787] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.724 [2024-07-15 19:06:50.919859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.724 [2024-07-15 19:06:50.919877] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.724 [2024-07-15 19:06:50.919889] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.724 [2024-07-15 19:06:50.919898] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.724 [2024-07-15 19:06:50.920020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.724 [2024-07-15 19:06:50.920273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.724 [2024-07-15 19:06:50.977801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:24.657 19:06:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.657 19:06:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:24.657 19:06:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:24.657 19:06:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.657 19:06:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:24.657 19:06:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.657 19:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76893 00:16:24.657 19:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:24.657 [2024-07-15 19:06:51.936453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.915 19:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:25.174 Malloc0 00:16:25.174 19:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:25.432 19:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:25.690 19:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.948 [2024-07-15 19:06:53.008161] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.948 19:06:53 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:26.225 [2024-07-15 19:06:53.280357] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:26.225 19:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76943 00:16:26.225 19:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:26.225 19:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:26.225 19:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76943 /var/tmp/bdevperf.sock 00:16:26.225 19:06:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76943 ']' 00:16:26.225 19:06:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.225 19:06:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.225 19:06:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:26.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.225 19:06:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.225 19:06:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:27.175 19:06:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.175 19:06:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:27.175 19:06:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:27.433 19:06:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:27.743 Nvme0n1 00:16:27.743 19:06:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:28.000 Nvme0n1 00:16:28.000 19:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:28.000 19:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:30.557 19:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:30.557 19:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:30.557 19:06:57 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:30.557 19:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:31.490 19:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:31.490 19:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:31.490 19:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.490 19:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:31.748 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.748 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:31.748 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:31.748 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.315 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.315 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:32.315 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.315 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:32.315 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.315 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:32.315 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.315 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:32.574 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.574 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:32.574 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.574 19:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:32.832 19:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.832 19:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:16:32.832 19:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:32.832 19:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.091 19:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.091 19:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:33.091 19:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:33.349 19:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:33.607 19:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:34.984 19:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:34.984 19:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:34.984 19:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.984 19:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:34.984 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:34.984 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:34.984 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:34.984 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.244 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.244 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:35.244 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.244 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:35.503 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.503 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:35.503 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:35.503 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.761 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.761 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:35.761 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.761 19:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:36.020 19:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.020 19:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:36.020 19:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:36.020 19:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.279 19:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.279 19:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:36.279 19:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:36.538 19:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:36.796 19:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:38.174 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:38.174 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:38.174 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.174 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:38.174 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.174 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:38.174 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.174 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:38.433 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:38.433 
19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:38.433 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.433 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:38.692 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.692 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:38.692 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.692 19:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:38.951 19:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.951 19:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:38.951 19:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.951 19:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:39.210 19:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.210 19:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:39.210 19:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.210 19:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:39.468 19:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.468 19:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:39.468 19:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:39.731 19:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:39.995 19:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:40.930 19:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:40.930 19:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:40.930 19:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.930 19:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:41.188 19:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.188 19:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:41.188 19:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.188 19:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:41.447 19:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:41.447 19:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:41.447 19:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.447 19:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:42.013 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.013 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:42.013 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:42.013 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.272 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.272 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:42.272 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.272 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:42.272 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.272 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:42.530 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.530 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:42.531 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:42.531 19:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:42.531 19:07:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:42.788 19:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:43.045 19:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:44.423 19:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:44.423 19:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:44.423 19:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.423 19:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:44.423 19:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:44.423 19:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:44.423 19:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.423 19:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:44.681 19:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:44.681 19:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:44.681 19:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.681 19:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:44.940 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.940 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:44.940 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:44.940 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.203 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.203 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:45.203 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:45.203 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.471 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:45.471 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:45.471 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.471 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:45.730 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:45.730 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:45.730 19:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:45.988 19:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:46.245 19:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:47.179 19:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:47.179 19:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:47.179 19:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.179 19:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:47.437 19:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:47.437 19:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:47.437 19:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.438 19:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:47.695 19:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.695 19:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:47.695 19:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.695 19:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:47.953 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.953 19:07:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:47.953 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.953 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:48.212 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.212 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:48.212 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.212 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:48.472 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:48.472 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:48.472 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.472 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:48.731 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.731 19:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:48.989 19:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:48.989 19:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:49.247 19:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:49.505 19:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:50.436 19:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:50.436 19:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:50.436 19:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.436 19:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:50.694 19:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.694 19:07:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:50.694 19:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:50.694 19:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.950 19:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.950 19:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:50.950 19:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.950 19:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:51.207 19:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.207 19:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:51.207 19:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.464 19:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:51.464 19:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.464 19:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:51.721 19:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:51.721 19:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.979 19:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.979 19:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:51.979 19:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.979 19:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:52.237 19:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.237 19:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:52.238 19:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:52.496 19:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:52.755 19:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:53.690 19:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:53.690 19:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:53.690 19:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.690 19:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:53.949 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:53.949 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:53.949 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.949 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:54.208 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.208 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:54.208 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.208 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:54.466 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.466 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:54.466 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.466 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:54.724 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.724 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:54.724 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.724 19:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:54.982 19:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.982 19:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:54.982 19:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.982 19:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:55.239 19:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.239 19:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:55.239 19:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:55.497 19:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:55.755 19:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:56.690 19:07:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:56.690 19:07:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:56.690 19:07:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.690 19:07:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:56.947 19:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.947 19:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:56.947 19:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.947 19:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:57.512 19:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.512 19:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:57.512 19:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:57.513 19:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.513 19:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.513 19:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:57.513 19:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.513 19:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:16:57.770 19:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.770 19:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:57.770 19:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.770 19:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:58.335 19:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.335 19:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:58.335 19:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.335 19:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:58.592 19:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.592 19:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:58.592 19:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:58.850 19:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:59.108 19:07:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:00.043 19:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:00.043 19:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:00.043 19:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.043 19:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:00.300 19:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.300 19:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:00.300 19:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.300 19:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:00.557 19:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:00.557 19:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:17:00.557 19:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:00.557 19:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.815 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.815 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:00.815 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.815 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:01.073 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.073 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:01.073 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.073 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:01.397 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.397 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:01.397 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.397 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:01.656 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:01.656 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76943 00:17:01.656 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76943 ']' 00:17:01.656 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76943 00:17:01.914 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:01.914 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:01.914 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76943 00:17:01.914 killing process with pid 76943 00:17:01.914 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:01.914 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:01.914 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76943' 00:17:01.914 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76943 
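Every check_status round traced above follows the same pattern; condensed into a single round (using non_optimized/inaccessible, the last combination exercised before teardown), with the NQN, addresses and RPC socket exactly as used throughout this run and /home/vagrant/spdk_repo/spdk/scripts/rpc.py abbreviated to rpc.py:

  # flip the ANA state advertised by each target listener
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
  sleep 1
  # read the initiator-side view through bdevperf's RPC socket; the same query is
  # repeated for trsvcid 4421 and for the .connected and .accessible fields
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
  # midway through the test (host/multipath_status.sh@116) the policy is switched
  # before the rounds are repeated:
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active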
00:17:01.914 19:07:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76943 00:17:01.914 Connection closed with partial response: 00:17:01.914 00:17:01.914 00:17:02.174 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76943 00:17:02.174 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:02.174 [2024-07-15 19:06:53.355567] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:17:02.174 [2024-07-15 19:06:53.355693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76943 ] 00:17:02.174 [2024-07-15 19:06:53.490903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.174 [2024-07-15 19:06:53.605973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.174 [2024-07-15 19:06:53.661241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:02.174 Running I/O for 90 seconds... 00:17:02.174 [2024-07-15 19:07:10.012973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.174 [2024-07-15 19:07:10.013094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:02.174 [2024-07-15 19:07:10.013160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59840 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.013740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013761] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.175 [2024-07-15 19:07:10.013777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.175 [2024-07-15 19:07:10.013814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.175 [2024-07-15 19:07:10.013850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.175 [2024-07-15 19:07:10.013885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.175 [2024-07-15 19:07:10.013921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.175 [2024-07-15 19:07:10.013956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.013978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.175 [2024-07-15 19:07:10.014002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.175 [2024-07-15 19:07:10.014040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 
19:07:10.014148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 
sqhd:0049 p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.175 [2024-07-15 19:07:10.014656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:02.175 [2024-07-15 19:07:10.014681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.014696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.014717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.014731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.014753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.014768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.014789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.014804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.014825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.014840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.014862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.014876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.014898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.014912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.014942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.014958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.014980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.014994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.015030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.015068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.015104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015250] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.176 [2024-07-15 19:07:10.015801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.015837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.015882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.015919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.015955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.015977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.015992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.016013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:109 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.016027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.016049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.016063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.016085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.016099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.016120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.016135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.016156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.016171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.016192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.016206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:02.176 [2024-07-15 19:07:10.016227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.176 [2024-07-15 19:07:10.016245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.177 [2024-07-15 19:07:10.016287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.177 [2024-07-15 19:07:10.016330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.177 [2024-07-15 19:07:10.016368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016390] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.177 [2024-07-15 19:07:10.016405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 
dnr:0 00:17:02.177 [2024-07-15 19:07:10.016818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.016977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.016999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.017014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.017035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.017049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.017071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.177 [2024-07-15 19:07:10.017085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.017107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.177 [2024-07-15 19:07:10.017121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.017143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.177 [2024-07-15 19:07:10.017158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.017179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.177 [2024-07-15 19:07:10.017194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.017215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.177 [2024-07-15 19:07:10.017230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.017252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.177 [2024-07-15 19:07:10.017266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.017288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.177 [2024-07-15 19:07:10.017303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.018339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.177 [2024-07-15 19:07:10.018366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.018401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.018417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.018447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.018469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.018511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.018529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.018560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.018575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.018605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.018620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.018650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.018670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.018707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.018723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.018950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.018971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.019005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.019021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.019052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.019067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.019098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.019113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.019143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.019170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.019202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.019218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:02.177 [2024-07-15 19:07:10.019248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.177 [2024-07-15 19:07:10.019264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:10.019298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:10.019314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:10.019348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:02.178 [2024-07-15 19:07:10.019364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:10.019395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:10.019410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:10.019440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:10.019460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:10.019491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:10.019519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.181445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.181538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.181603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.181624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.182143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.182182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.182246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.182285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.182325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.182371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.182407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.182458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.182494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.182546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.182581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.182616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.182652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.182686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.182716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.182731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.184445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.184474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.184516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.184535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.184557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.184572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.184593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.184619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.184654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.184669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.184690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.184705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.184726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.184740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.184761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.184775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.184796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.184810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:17:02.178 [2024-07-15 19:07:26.184830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.184845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.184866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.184890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.184911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.184925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.184971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.184987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.185009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.185024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.185045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.185069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.185090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.178 [2024-07-15 19:07:26.185105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.185126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.185140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.185161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.185175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.185197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.178 [2024-07-15 19:07:26.185211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:02.178 [2024-07-15 19:07:26.185248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.179 [2024-07-15 19:07:26.185266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:02.179 [2024-07-15 19:07:26.185288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.179 [2024-07-15 19:07:26.185310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:02.179 [2024-07-15 19:07:26.185331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.179 [2024-07-15 19:07:26.185346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.179 [2024-07-15 19:07:26.185367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.179 [2024-07-15 19:07:26.185381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:02.179 [2024-07-15 19:07:26.185403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.179 [2024-07-15 19:07:26.185417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:02.179 [2024-07-15 19:07:26.185438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.179 [2024-07-15 19:07:26.185462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:02.179 [2024-07-15 19:07:26.185485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.179 [2024-07-15 19:07:26.185511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:02.179 [2024-07-15 19:07:26.185535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.179 [2024-07-15 19:07:26.185550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:02.179 [2024-07-15 19:07:26.185571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.179 [2024-07-15 19:07:26.185585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:02.179 [2024-07-15 19:07:26.185616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.179 [2024-07-15 19:07:26.185631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:02.179 [2024-07-15 19:07:26.185652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.179 [2024-07-15 19:07:26.185667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:02.179 [2024-07-15 19:07:26.185688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.179 [2024-07-15 19:07:26.185702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:02.179 [2024-07-15 19:07:26.185723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.179 [2024-07-15 19:07:26.185738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:02.179 Received shutdown signal, test time was about 33.611894 seconds 00:17:02.179 00:17:02.179 Latency(us) 00:17:02.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.179 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:02.179 Verification LBA range: start 0x0 length 0x4000 00:17:02.179 Nvme0n1 : 33.61 8351.59 32.62 0.00 0.00 15294.64 389.12 4026531.84 00:17:02.179 =================================================================================================================== 00:17:02.179 Total : 8351.59 32.62 0.00 0.00 15294.64 389.12 4026531.84 00:17:02.179 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:02.438 rmmod nvme_tcp 00:17:02.438 rmmod nvme_fabrics 00:17:02.438 rmmod nvme_keyring 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76893 ']' 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@490 -- # killprocess 76893 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76893 ']' 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76893 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76893 00:17:02.438 killing process with pid 76893 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76893' 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76893 00:17:02.438 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76893 00:17:02.697 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:02.697 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:02.697 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:02.697 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:02.697 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:02.697 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.697 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.697 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.697 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:02.697 ************************************ 00:17:02.697 END TEST nvmf_host_multipath_status 00:17:02.697 ************************************ 00:17:02.697 00:17:02.697 real 0m39.790s 00:17:02.697 user 2m8.327s 00:17:02.697 sys 0m11.994s 00:17:02.697 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:02.697 19:07:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:02.697 19:07:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:02.697 19:07:29 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:02.697 19:07:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:02.697 19:07:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.697 19:07:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:02.697 ************************************ 00:17:02.697 START TEST nvmf_discovery_remove_ifc 00:17:02.697 ************************************ 00:17:02.697 19:07:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 
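Before the discovery_remove_ifc run below gets under way, the teardown traced above is worth summarizing. The snippet that follows is a simplified editorial sketch, not the verbatim autotest code: the helper name killprocess and the individual commands are taken from the trace, but the grouping and the simplified logic are mine.

  # unload the kernel NVMe-oF initiator modules used by the previous test
  # (the trace shows this pulling out nvme_tcp, nvme_fabrics and nvme_keyring)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # stop the target process started for the previous test and reap it
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }
  killprocess 76893

  # drop the test address from the initiator-side veth interface
  ip -4 addr flush nvmf_init_if
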
00:17:02.956 * Looking for test storage... 00:17:02.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:02.956 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:02.957 Cannot find device "nvmf_tgt_br" 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
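nvmf_veth_init, whose trace continues below, first clears any leftover interfaces (hence the "Cannot find device" messages) and then builds a small virtual topology with iproute2. Condensed into one sketch — the interface names, addresses, and iptables rules come straight from the trace, the grouping into a single script is editorial — the topology it creates looks roughly like this:

  # the target lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk

  # three veth pairs: one for the initiator, two for the target
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP traffic into the initiator interface and across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 in the trace below are simply connectivity checks on this topology before the target is started.
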
00:17:02.957 Cannot find device "nvmf_tgt_br2" 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:02.957 Cannot find device "nvmf_tgt_br" 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:02.957 Cannot find device "nvmf_tgt_br2" 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.957 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:03.215 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:03.215 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:03.215 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:03.215 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:03.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:17:03.216 00:17:03.216 --- 10.0.0.2 ping statistics --- 00:17:03.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.216 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:03.216 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:03.216 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:17:03.216 00:17:03.216 --- 10.0.0.3 ping statistics --- 00:17:03.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.216 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:03.216 00:17:03.216 --- 10.0.0.1 ping statistics --- 00:17:03.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.216 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:03.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77738 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77738 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77738 ']' 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.216 19:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:03.216 [2024-07-15 19:07:30.460857] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:17:03.216 [2024-07-15 19:07:30.460953] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.473 [2024-07-15 19:07:30.600066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.473 [2024-07-15 19:07:30.711478] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.473 [2024-07-15 19:07:30.711574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.473 [2024-07-15 19:07:30.711604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.473 [2024-07-15 19:07:30.711619] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.473 [2024-07-15 19:07:30.711627] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.473 [2024-07-15 19:07:30.711663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.731 [2024-07-15 19:07:30.769023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.296 [2024-07-15 19:07:31.491014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.296 [2024-07-15 19:07:31.499127] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:04.296 null0 00:17:04.296 [2024-07-15 19:07:31.531094] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.296 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
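The target configuration itself is pushed through the batched rpc_cmd call traced above; the script hides the individual RPCs, but the notices that follow (TCP transport init, a listener on port 8009, a null0 bdev, a listener on port 4420) correspond to a sequence roughly like the sketch below. The rpc.py sub-commands are standard SPDK RPCs, not copied from this script, and the null bdev size/block size are placeholder values rather than values taken from the log.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # default socket /var/tmp/spdk.sock, i.e. the target

  $RPC nvmf_create_transport -t tcp -o                            # "*** TCP Transport Init ***"
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
       -t tcp -a 10.0.0.2 -s 8009                                 # discovery service on 8009
  $RPC bdev_null_create null0 1000 512                            # placeholder size and block size
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
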
00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77770 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77770 /tmp/host.sock 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77770 ']' 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.296 19:07:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.554 [2024-07-15 19:07:31.609459] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:17:04.554 [2024-07-15 19:07:31.609874] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77770 ] 00:17:04.554 [2024-07-15 19:07:31.748990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.812 [2024-07-15 19:07:31.875580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.380 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.380 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:05.380 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:05.380 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:05.380 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.380 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.380 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.380 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:05.380 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.380 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.380 [2024-07-15 19:07:32.654926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:05.640 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.640 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:05.640 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.640 19:07:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:06.598 [2024-07-15 19:07:33.704234] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:06.598 [2024-07-15 19:07:33.704270] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:06.598 [2024-07-15 19:07:33.704290] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:06.598 [2024-07-15 19:07:33.710291] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:06.598 [2024-07-15 19:07:33.767727] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:06.598 [2024-07-15 19:07:33.767954] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:06.598 [2024-07-15 19:07:33.768028] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:06.598 [2024-07-15 19:07:33.768109] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:06.598 [2024-07-15 19:07:33.768271] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:06.598 [2024-07-15 19:07:33.772855] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a20fd0 was disconnected and freed. delete nvme_qpair. 
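On the host side, the whole scenario hangs off one discovery RPC plus a small polling helper. The bdev_nvme_start_discovery invocation below is copied from the trace; get_bdev_list mirrors the pipeline that keeps reappearing in the trace (bdev_get_bdevs | jq | sort | xargs), while wait_for_bdev is a hedged reconstruction of the loop the script uses — the real helper presumably also enforces a timeout.

  rpc_host() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }

  rpc_host bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach

  get_bdev_list() {                # space-separated, sorted names of all bdevs the host sees
      rpc_host bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {                # poll once a second until the bdev list matches the expectation
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme0n1            # discovery attached cnode0, exposing its namespace as nvme0n1
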
00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:06.598 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.922 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:06.922 19:07:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:07.854 19:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:07.854 19:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:07.854 19:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:07.854 19:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:07.854 19:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.854 19:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.854 19:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:07.854 19:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.854 19:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:07.854 19:07:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:17:08.798 19:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:08.798 19:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:08.798 19:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:08.798 19:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.798 19:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:08.799 19:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:08.799 19:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:08.799 19:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.799 19:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:08.799 19:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:10.171 19:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:10.172 19:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:10.172 19:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:10.172 19:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:10.172 19:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.172 19:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:10.172 19:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:10.172 19:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.172 19:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:10.172 19:07:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:11.109 19:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:11.109 19:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:11.109 19:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:11.109 19:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.109 19:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:11.109 19:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:11.109 19:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:11.109 19:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.109 19:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:11.109 19:07:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:12.072 19:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:12.072 19:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:17:12.072 19:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.072 19:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:12.072 19:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:12.072 19:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:12.072 19:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:12.072 19:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.072 [2024-07-15 19:07:39.196684] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:12.072 [2024-07-15 19:07:39.196755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.073 [2024-07-15 19:07:39.196780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.073 [2024-07-15 19:07:39.196800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.073 [2024-07-15 19:07:39.196813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.073 [2024-07-15 19:07:39.196827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.073 [2024-07-15 19:07:39.196840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.073 [2024-07-15 19:07:39.196856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.073 [2024-07-15 19:07:39.196870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.073 [2024-07-15 19:07:39.196885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.073 [2024-07-15 19:07:39.196899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.073 [2024-07-15 19:07:39.196915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1986c60 is same with the state(5) to be set 00:17:12.073 [2024-07-15 19:07:39.206680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1986c60 (9): Bad file descriptor 00:17:12.073 19:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:12.073 19:07:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:12.073 [2024-07-15 19:07:39.216705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:13.007 19:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:13.007 19:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:13.007 19:07:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:13.007 19:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:13.007 19:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:13.007 19:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.007 19:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:13.007 [2024-07-15 19:07:40.223596] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:13.007 [2024-07-15 19:07:40.223675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1986c60 with addr=10.0.0.2, port=4420 00:17:13.007 [2024-07-15 19:07:40.223702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1986c60 is same with the state(5) to be set 00:17:13.007 [2024-07-15 19:07:40.223759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1986c60 (9): Bad file descriptor 00:17:13.007 [2024-07-15 19:07:40.224638] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:13.007 [2024-07-15 19:07:40.224699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:13.007 [2024-07-15 19:07:40.224718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:13.007 [2024-07-15 19:07:40.224747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:13.007 [2024-07-15 19:07:40.224784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:13.007 [2024-07-15 19:07:40.224805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:13.007 19:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.007 19:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:13.007 19:07:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:13.940 [2024-07-15 19:07:41.224873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:13.940 [2024-07-15 19:07:41.224943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:13.940 [2024-07-15 19:07:41.224956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:13.940 [2024-07-15 19:07:41.224966] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:13.940 [2024-07-15 19:07:41.224993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
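The reconnect failures above are the intended outcome: a few steps earlier the test pulled the target's data interface out from under the live connection. Condensed from the trace (the comment about the timeouts is editorial interpretation, not part of the script):

  # remove the listener address and take the target-side veth down inside the namespace
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

  # with --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1 the host abandons the controller
  # after its reconnect attempts fail, so nvme0n1 must vanish from the bdev list
  wait_for_bdev ''
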
00:17:13.940 [2024-07-15 19:07:41.225030] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:13.940 [2024-07-15 19:07:41.225104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.940 [2024-07-15 19:07:41.225121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.940 [2024-07-15 19:07:41.225135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.940 [2024-07-15 19:07:41.225144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.940 [2024-07-15 19:07:41.225154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.940 [2024-07-15 19:07:41.225163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.940 [2024-07-15 19:07:41.225173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.940 [2024-07-15 19:07:41.225183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.940 [2024-07-15 19:07:41.225193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:13.940 [2024-07-15 19:07:41.225202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.940 [2024-07-15 19:07:41.225211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:17:13.940 [2024-07-15 19:07:41.225846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198aa00 (9): Bad file descriptor 00:17:13.940 [2024-07-15 19:07:41.226856] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:13.940 [2024-07-15 19:07:41.226879] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:14.199 19:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:15.134 19:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:15.134 19:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.134 19:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.134 19:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:15.134 19:07:42 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.134 19:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:15.134 19:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:15.392 19:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.392 19:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:15.392 19:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:15.960 [2024-07-15 19:07:43.238850] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:15.960 [2024-07-15 19:07:43.238885] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:15.960 [2024-07-15 19:07:43.238904] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:15.960 [2024-07-15 19:07:43.244908] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:16.218 [2024-07-15 19:07:43.301417] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:16.218 [2024-07-15 19:07:43.301471] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:16.218 [2024-07-15 19:07:43.301496] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:16.218 [2024-07-15 19:07:43.301529] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:16.218 [2024-07-15 19:07:43.301539] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:16.218 [2024-07-15 19:07:43.307485] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19f62d0 was disconnected and freed. delete nvme_qpair. 
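Recovery is the mirror image. Once the address and link come back (the two commands below are copied from the trace), the still-running discovery poller reconnects to 10.0.0.2:8009 and re-attaches the subsystem under a new controller name, which is why the test now waits for nvme1n1 rather than nvme0n1:

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  wait_for_bdev nvme1n1     # the re-attached controller is nvme1, so its namespace shows up as nvme1n1
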
00:17:16.218 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:16.218 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.218 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:16.218 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.218 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:16.218 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:16.218 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:16.218 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77770 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77770 ']' 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77770 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77770 00:17:16.477 killing process with pid 77770 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77770' 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77770 00:17:16.477 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77770 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:16.736 rmmod nvme_tcp 00:17:16.736 rmmod nvme_fabrics 00:17:16.736 rmmod nvme_keyring 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:16.736 19:07:43 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77738 ']' 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77738 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77738 ']' 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77738 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77738 00:17:16.736 killing process with pid 77738 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77738' 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77738 00:17:16.736 19:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77738 00:17:16.995 19:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:16.995 19:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:16.995 19:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:16.995 19:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.995 19:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:16.995 19:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.995 19:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.995 19:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.995 19:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:16.995 00:17:16.995 real 0m14.206s 00:17:16.995 user 0m24.631s 00:17:16.995 sys 0m2.463s 00:17:16.995 19:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:16.995 19:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:16.995 ************************************ 00:17:16.995 END TEST nvmf_discovery_remove_ifc 00:17:16.995 ************************************ 00:17:16.995 19:07:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:16.995 19:07:44 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:16.995 19:07:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:16.995 19:07:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.995 19:07:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.995 ************************************ 00:17:16.995 START TEST nvmf_identify_kernel_target 00:17:16.995 ************************************ 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:16.995 * Looking for test storage... 00:17:16.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:17:16.995 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:17:16.996 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.996 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.996 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:16.996 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.996 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:16.996 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.996 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.996 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.996 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.996 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.996 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.255 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:17.256 Cannot find device "nvmf_tgt_br" 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:17.256 Cannot find device "nvmf_tgt_br2" 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:17.256 Cannot find device "nvmf_tgt_br" 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:17.256 Cannot find device "nvmf_tgt_br2" 00:17:17.256 19:07:44 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:17.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:17.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:17.256 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:17.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:17.516 00:17:17.516 --- 10.0.0.2 ping statistics --- 00:17:17.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.516 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:17.516 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:17.516 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:17:17.516 00:17:17.516 --- 10.0.0.3 ping statistics --- 00:17:17.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.516 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:17.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:17:17.516 00:17:17.516 --- 10.0.0.1 ping statistics --- 00:17:17.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.516 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:17.516 19:07:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:17.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:18.035 Waiting for block devices as requested 00:17:18.035 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:18.035 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:18.035 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:18.035 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:18.035 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:18.035 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:18.035 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:18.035 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:18.035 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:18.035 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:18.035 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:18.293 No valid GPT data, bailing 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:18.293 No valid GPT data, bailing 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:18.293 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:18.294 No valid GPT data, bailing 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:18.294 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:18.553 No valid GPT data, bailing 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid=1bdc3113-659b-4df6-a9cf-a9738596adff -a 10.0.0.1 -t tcp -s 4420 00:17:18.553 00:17:18.553 Discovery Log Number of Records 2, Generation counter 2 00:17:18.553 =====Discovery Log Entry 0====== 00:17:18.553 trtype: tcp 00:17:18.553 adrfam: ipv4 00:17:18.553 subtype: current discovery subsystem 00:17:18.553 treq: not specified, sq flow control disable supported 00:17:18.553 portid: 1 00:17:18.553 trsvcid: 4420 00:17:18.553 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:18.553 traddr: 10.0.0.1 00:17:18.553 eflags: none 00:17:18.553 sectype: none 00:17:18.553 =====Discovery Log Entry 1====== 00:17:18.553 trtype: tcp 00:17:18.553 adrfam: ipv4 00:17:18.553 subtype: nvme subsystem 00:17:18.553 treq: not specified, sq flow control disable supported 00:17:18.553 portid: 1 00:17:18.553 trsvcid: 4420 00:17:18.553 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:18.553 traddr: 10.0.0.1 00:17:18.553 eflags: none 00:17:18.553 sectype: none 00:17:18.553 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:18.553 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:18.814 ===================================================== 00:17:18.814 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:18.814 ===================================================== 00:17:18.814 Controller Capabilities/Features 00:17:18.814 ================================ 00:17:18.814 Vendor ID: 0000 00:17:18.814 Subsystem Vendor ID: 0000 00:17:18.814 Serial Number: 270b969d860f56a7e4d1 00:17:18.814 Model Number: Linux 00:17:18.814 Firmware Version: 6.7.0-68 00:17:18.814 Recommended Arb Burst: 0 00:17:18.814 IEEE OUI Identifier: 00 00 00 00:17:18.814 Multi-path I/O 00:17:18.814 May have multiple subsystem ports: No 00:17:18.814 May have multiple controllers: No 00:17:18.814 Associated with SR-IOV VF: No 00:17:18.814 Max Data Transfer Size: Unlimited 00:17:18.814 Max Number of Namespaces: 0 
00:17:18.814 Max Number of I/O Queues: 1024 00:17:18.814 NVMe Specification Version (VS): 1.3 00:17:18.814 NVMe Specification Version (Identify): 1.3 00:17:18.814 Maximum Queue Entries: 1024 00:17:18.814 Contiguous Queues Required: No 00:17:18.814 Arbitration Mechanisms Supported 00:17:18.814 Weighted Round Robin: Not Supported 00:17:18.814 Vendor Specific: Not Supported 00:17:18.814 Reset Timeout: 7500 ms 00:17:18.814 Doorbell Stride: 4 bytes 00:17:18.814 NVM Subsystem Reset: Not Supported 00:17:18.814 Command Sets Supported 00:17:18.814 NVM Command Set: Supported 00:17:18.814 Boot Partition: Not Supported 00:17:18.814 Memory Page Size Minimum: 4096 bytes 00:17:18.814 Memory Page Size Maximum: 4096 bytes 00:17:18.814 Persistent Memory Region: Not Supported 00:17:18.814 Optional Asynchronous Events Supported 00:17:18.814 Namespace Attribute Notices: Not Supported 00:17:18.814 Firmware Activation Notices: Not Supported 00:17:18.814 ANA Change Notices: Not Supported 00:17:18.814 PLE Aggregate Log Change Notices: Not Supported 00:17:18.814 LBA Status Info Alert Notices: Not Supported 00:17:18.814 EGE Aggregate Log Change Notices: Not Supported 00:17:18.814 Normal NVM Subsystem Shutdown event: Not Supported 00:17:18.814 Zone Descriptor Change Notices: Not Supported 00:17:18.814 Discovery Log Change Notices: Supported 00:17:18.814 Controller Attributes 00:17:18.814 128-bit Host Identifier: Not Supported 00:17:18.814 Non-Operational Permissive Mode: Not Supported 00:17:18.814 NVM Sets: Not Supported 00:17:18.814 Read Recovery Levels: Not Supported 00:17:18.814 Endurance Groups: Not Supported 00:17:18.814 Predictable Latency Mode: Not Supported 00:17:18.814 Traffic Based Keep ALive: Not Supported 00:17:18.814 Namespace Granularity: Not Supported 00:17:18.814 SQ Associations: Not Supported 00:17:18.814 UUID List: Not Supported 00:17:18.814 Multi-Domain Subsystem: Not Supported 00:17:18.814 Fixed Capacity Management: Not Supported 00:17:18.814 Variable Capacity Management: Not Supported 00:17:18.814 Delete Endurance Group: Not Supported 00:17:18.814 Delete NVM Set: Not Supported 00:17:18.814 Extended LBA Formats Supported: Not Supported 00:17:18.814 Flexible Data Placement Supported: Not Supported 00:17:18.814 00:17:18.814 Controller Memory Buffer Support 00:17:18.814 ================================ 00:17:18.814 Supported: No 00:17:18.814 00:17:18.814 Persistent Memory Region Support 00:17:18.814 ================================ 00:17:18.814 Supported: No 00:17:18.814 00:17:18.814 Admin Command Set Attributes 00:17:18.814 ============================ 00:17:18.814 Security Send/Receive: Not Supported 00:17:18.814 Format NVM: Not Supported 00:17:18.814 Firmware Activate/Download: Not Supported 00:17:18.814 Namespace Management: Not Supported 00:17:18.814 Device Self-Test: Not Supported 00:17:18.814 Directives: Not Supported 00:17:18.814 NVMe-MI: Not Supported 00:17:18.814 Virtualization Management: Not Supported 00:17:18.814 Doorbell Buffer Config: Not Supported 00:17:18.814 Get LBA Status Capability: Not Supported 00:17:18.814 Command & Feature Lockdown Capability: Not Supported 00:17:18.814 Abort Command Limit: 1 00:17:18.814 Async Event Request Limit: 1 00:17:18.814 Number of Firmware Slots: N/A 00:17:18.814 Firmware Slot 1 Read-Only: N/A 00:17:18.814 Firmware Activation Without Reset: N/A 00:17:18.814 Multiple Update Detection Support: N/A 00:17:18.814 Firmware Update Granularity: No Information Provided 00:17:18.814 Per-Namespace SMART Log: No 00:17:18.814 Asymmetric Namespace Access Log Page: 
Not Supported 00:17:18.814 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:18.814 Command Effects Log Page: Not Supported 00:17:18.814 Get Log Page Extended Data: Supported 00:17:18.814 Telemetry Log Pages: Not Supported 00:17:18.814 Persistent Event Log Pages: Not Supported 00:17:18.814 Supported Log Pages Log Page: May Support 00:17:18.814 Commands Supported & Effects Log Page: Not Supported 00:17:18.814 Feature Identifiers & Effects Log Page:May Support 00:17:18.814 NVMe-MI Commands & Effects Log Page: May Support 00:17:18.814 Data Area 4 for Telemetry Log: Not Supported 00:17:18.814 Error Log Page Entries Supported: 1 00:17:18.814 Keep Alive: Not Supported 00:17:18.814 00:17:18.814 NVM Command Set Attributes 00:17:18.814 ========================== 00:17:18.814 Submission Queue Entry Size 00:17:18.814 Max: 1 00:17:18.814 Min: 1 00:17:18.814 Completion Queue Entry Size 00:17:18.814 Max: 1 00:17:18.814 Min: 1 00:17:18.814 Number of Namespaces: 0 00:17:18.814 Compare Command: Not Supported 00:17:18.814 Write Uncorrectable Command: Not Supported 00:17:18.814 Dataset Management Command: Not Supported 00:17:18.814 Write Zeroes Command: Not Supported 00:17:18.814 Set Features Save Field: Not Supported 00:17:18.814 Reservations: Not Supported 00:17:18.814 Timestamp: Not Supported 00:17:18.814 Copy: Not Supported 00:17:18.814 Volatile Write Cache: Not Present 00:17:18.814 Atomic Write Unit (Normal): 1 00:17:18.814 Atomic Write Unit (PFail): 1 00:17:18.814 Atomic Compare & Write Unit: 1 00:17:18.814 Fused Compare & Write: Not Supported 00:17:18.814 Scatter-Gather List 00:17:18.815 SGL Command Set: Supported 00:17:18.815 SGL Keyed: Not Supported 00:17:18.815 SGL Bit Bucket Descriptor: Not Supported 00:17:18.815 SGL Metadata Pointer: Not Supported 00:17:18.815 Oversized SGL: Not Supported 00:17:18.815 SGL Metadata Address: Not Supported 00:17:18.815 SGL Offset: Supported 00:17:18.815 Transport SGL Data Block: Not Supported 00:17:18.815 Replay Protected Memory Block: Not Supported 00:17:18.815 00:17:18.815 Firmware Slot Information 00:17:18.815 ========================= 00:17:18.815 Active slot: 0 00:17:18.815 00:17:18.815 00:17:18.815 Error Log 00:17:18.815 ========= 00:17:18.815 00:17:18.815 Active Namespaces 00:17:18.815 ================= 00:17:18.815 Discovery Log Page 00:17:18.815 ================== 00:17:18.815 Generation Counter: 2 00:17:18.815 Number of Records: 2 00:17:18.815 Record Format: 0 00:17:18.815 00:17:18.815 Discovery Log Entry 0 00:17:18.815 ---------------------- 00:17:18.815 Transport Type: 3 (TCP) 00:17:18.815 Address Family: 1 (IPv4) 00:17:18.815 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:18.815 Entry Flags: 00:17:18.815 Duplicate Returned Information: 0 00:17:18.815 Explicit Persistent Connection Support for Discovery: 0 00:17:18.815 Transport Requirements: 00:17:18.815 Secure Channel: Not Specified 00:17:18.815 Port ID: 1 (0x0001) 00:17:18.815 Controller ID: 65535 (0xffff) 00:17:18.815 Admin Max SQ Size: 32 00:17:18.815 Transport Service Identifier: 4420 00:17:18.815 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:18.815 Transport Address: 10.0.0.1 00:17:18.815 Discovery Log Entry 1 00:17:18.815 ---------------------- 00:17:18.815 Transport Type: 3 (TCP) 00:17:18.815 Address Family: 1 (IPv4) 00:17:18.815 Subsystem Type: 2 (NVM Subsystem) 00:17:18.815 Entry Flags: 00:17:18.815 Duplicate Returned Information: 0 00:17:18.815 Explicit Persistent Connection Support for Discovery: 0 00:17:18.815 Transport Requirements: 00:17:18.815 
Secure Channel: Not Specified 00:17:18.815 Port ID: 1 (0x0001) 00:17:18.815 Controller ID: 65535 (0xffff) 00:17:18.815 Admin Max SQ Size: 32 00:17:18.815 Transport Service Identifier: 4420 00:17:18.815 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:18.815 Transport Address: 10.0.0.1 00:17:18.815 19:07:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:18.815 get_feature(0x01) failed 00:17:18.815 get_feature(0x02) failed 00:17:18.815 get_feature(0x04) failed 00:17:18.815 ===================================================== 00:17:18.815 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:18.815 ===================================================== 00:17:18.815 Controller Capabilities/Features 00:17:18.815 ================================ 00:17:18.815 Vendor ID: 0000 00:17:18.815 Subsystem Vendor ID: 0000 00:17:18.815 Serial Number: a2f030af171f942d7342 00:17:18.815 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:18.815 Firmware Version: 6.7.0-68 00:17:18.815 Recommended Arb Burst: 6 00:17:18.815 IEEE OUI Identifier: 00 00 00 00:17:18.815 Multi-path I/O 00:17:18.815 May have multiple subsystem ports: Yes 00:17:18.815 May have multiple controllers: Yes 00:17:18.815 Associated with SR-IOV VF: No 00:17:18.815 Max Data Transfer Size: Unlimited 00:17:18.815 Max Number of Namespaces: 1024 00:17:18.815 Max Number of I/O Queues: 128 00:17:18.815 NVMe Specification Version (VS): 1.3 00:17:18.815 NVMe Specification Version (Identify): 1.3 00:17:18.815 Maximum Queue Entries: 1024 00:17:18.815 Contiguous Queues Required: No 00:17:18.815 Arbitration Mechanisms Supported 00:17:18.815 Weighted Round Robin: Not Supported 00:17:18.815 Vendor Specific: Not Supported 00:17:18.815 Reset Timeout: 7500 ms 00:17:18.815 Doorbell Stride: 4 bytes 00:17:18.815 NVM Subsystem Reset: Not Supported 00:17:18.815 Command Sets Supported 00:17:18.815 NVM Command Set: Supported 00:17:18.815 Boot Partition: Not Supported 00:17:18.815 Memory Page Size Minimum: 4096 bytes 00:17:18.815 Memory Page Size Maximum: 4096 bytes 00:17:18.815 Persistent Memory Region: Not Supported 00:17:18.815 Optional Asynchronous Events Supported 00:17:18.815 Namespace Attribute Notices: Supported 00:17:18.815 Firmware Activation Notices: Not Supported 00:17:18.815 ANA Change Notices: Supported 00:17:18.815 PLE Aggregate Log Change Notices: Not Supported 00:17:18.815 LBA Status Info Alert Notices: Not Supported 00:17:18.815 EGE Aggregate Log Change Notices: Not Supported 00:17:18.815 Normal NVM Subsystem Shutdown event: Not Supported 00:17:18.815 Zone Descriptor Change Notices: Not Supported 00:17:18.815 Discovery Log Change Notices: Not Supported 00:17:18.815 Controller Attributes 00:17:18.815 128-bit Host Identifier: Supported 00:17:18.815 Non-Operational Permissive Mode: Not Supported 00:17:18.815 NVM Sets: Not Supported 00:17:18.815 Read Recovery Levels: Not Supported 00:17:18.815 Endurance Groups: Not Supported 00:17:18.815 Predictable Latency Mode: Not Supported 00:17:18.815 Traffic Based Keep ALive: Supported 00:17:18.815 Namespace Granularity: Not Supported 00:17:18.815 SQ Associations: Not Supported 00:17:18.815 UUID List: Not Supported 00:17:18.815 Multi-Domain Subsystem: Not Supported 00:17:18.815 Fixed Capacity Management: Not Supported 00:17:18.815 Variable Capacity Management: Not Supported 00:17:18.815 
Delete Endurance Group: Not Supported 00:17:18.815 Delete NVM Set: Not Supported 00:17:18.815 Extended LBA Formats Supported: Not Supported 00:17:18.815 Flexible Data Placement Supported: Not Supported 00:17:18.815 00:17:18.815 Controller Memory Buffer Support 00:17:18.815 ================================ 00:17:18.815 Supported: No 00:17:18.815 00:17:18.815 Persistent Memory Region Support 00:17:18.815 ================================ 00:17:18.815 Supported: No 00:17:18.815 00:17:18.815 Admin Command Set Attributes 00:17:18.815 ============================ 00:17:18.815 Security Send/Receive: Not Supported 00:17:18.815 Format NVM: Not Supported 00:17:18.815 Firmware Activate/Download: Not Supported 00:17:18.815 Namespace Management: Not Supported 00:17:18.815 Device Self-Test: Not Supported 00:17:18.815 Directives: Not Supported 00:17:18.815 NVMe-MI: Not Supported 00:17:18.815 Virtualization Management: Not Supported 00:17:18.815 Doorbell Buffer Config: Not Supported 00:17:18.815 Get LBA Status Capability: Not Supported 00:17:18.815 Command & Feature Lockdown Capability: Not Supported 00:17:18.815 Abort Command Limit: 4 00:17:18.815 Async Event Request Limit: 4 00:17:18.815 Number of Firmware Slots: N/A 00:17:18.815 Firmware Slot 1 Read-Only: N/A 00:17:18.815 Firmware Activation Without Reset: N/A 00:17:18.815 Multiple Update Detection Support: N/A 00:17:18.815 Firmware Update Granularity: No Information Provided 00:17:18.815 Per-Namespace SMART Log: Yes 00:17:18.815 Asymmetric Namespace Access Log Page: Supported 00:17:18.815 ANA Transition Time : 10 sec 00:17:18.815 00:17:18.815 Asymmetric Namespace Access Capabilities 00:17:18.815 ANA Optimized State : Supported 00:17:18.815 ANA Non-Optimized State : Supported 00:17:18.815 ANA Inaccessible State : Supported 00:17:18.815 ANA Persistent Loss State : Supported 00:17:18.815 ANA Change State : Supported 00:17:18.815 ANAGRPID is not changed : No 00:17:18.815 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:18.815 00:17:18.815 ANA Group Identifier Maximum : 128 00:17:18.815 Number of ANA Group Identifiers : 128 00:17:18.815 Max Number of Allowed Namespaces : 1024 00:17:18.815 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:17:18.815 Command Effects Log Page: Supported 00:17:18.815 Get Log Page Extended Data: Supported 00:17:18.815 Telemetry Log Pages: Not Supported 00:17:18.815 Persistent Event Log Pages: Not Supported 00:17:18.815 Supported Log Pages Log Page: May Support 00:17:18.815 Commands Supported & Effects Log Page: Not Supported 00:17:18.815 Feature Identifiers & Effects Log Page:May Support 00:17:18.815 NVMe-MI Commands & Effects Log Page: May Support 00:17:18.815 Data Area 4 for Telemetry Log: Not Supported 00:17:18.815 Error Log Page Entries Supported: 128 00:17:18.815 Keep Alive: Supported 00:17:18.815 Keep Alive Granularity: 1000 ms 00:17:18.815 00:17:18.815 NVM Command Set Attributes 00:17:18.815 ========================== 00:17:18.815 Submission Queue Entry Size 00:17:18.815 Max: 64 00:17:18.815 Min: 64 00:17:18.815 Completion Queue Entry Size 00:17:18.815 Max: 16 00:17:18.815 Min: 16 00:17:18.815 Number of Namespaces: 1024 00:17:18.815 Compare Command: Not Supported 00:17:18.815 Write Uncorrectable Command: Not Supported 00:17:18.815 Dataset Management Command: Supported 00:17:18.816 Write Zeroes Command: Supported 00:17:18.816 Set Features Save Field: Not Supported 00:17:18.816 Reservations: Not Supported 00:17:18.816 Timestamp: Not Supported 00:17:18.816 Copy: Not Supported 00:17:18.816 Volatile Write Cache: Present 
00:17:18.816 Atomic Write Unit (Normal): 1 00:17:18.816 Atomic Write Unit (PFail): 1 00:17:18.816 Atomic Compare & Write Unit: 1 00:17:18.816 Fused Compare & Write: Not Supported 00:17:18.816 Scatter-Gather List 00:17:18.816 SGL Command Set: Supported 00:17:18.816 SGL Keyed: Not Supported 00:17:18.816 SGL Bit Bucket Descriptor: Not Supported 00:17:18.816 SGL Metadata Pointer: Not Supported 00:17:18.816 Oversized SGL: Not Supported 00:17:18.816 SGL Metadata Address: Not Supported 00:17:18.816 SGL Offset: Supported 00:17:18.816 Transport SGL Data Block: Not Supported 00:17:18.816 Replay Protected Memory Block: Not Supported 00:17:18.816 00:17:18.816 Firmware Slot Information 00:17:18.816 ========================= 00:17:18.816 Active slot: 0 00:17:18.816 00:17:18.816 Asymmetric Namespace Access 00:17:18.816 =========================== 00:17:18.816 Change Count : 0 00:17:18.816 Number of ANA Group Descriptors : 1 00:17:18.816 ANA Group Descriptor : 0 00:17:18.816 ANA Group ID : 1 00:17:18.816 Number of NSID Values : 1 00:17:18.816 Change Count : 0 00:17:18.816 ANA State : 1 00:17:18.816 Namespace Identifier : 1 00:17:18.816 00:17:18.816 Commands Supported and Effects 00:17:18.816 ============================== 00:17:18.816 Admin Commands 00:17:18.816 -------------- 00:17:18.816 Get Log Page (02h): Supported 00:17:18.816 Identify (06h): Supported 00:17:18.816 Abort (08h): Supported 00:17:18.816 Set Features (09h): Supported 00:17:18.816 Get Features (0Ah): Supported 00:17:18.816 Asynchronous Event Request (0Ch): Supported 00:17:18.816 Keep Alive (18h): Supported 00:17:18.816 I/O Commands 00:17:18.816 ------------ 00:17:18.816 Flush (00h): Supported 00:17:18.816 Write (01h): Supported LBA-Change 00:17:18.816 Read (02h): Supported 00:17:18.816 Write Zeroes (08h): Supported LBA-Change 00:17:18.816 Dataset Management (09h): Supported 00:17:18.816 00:17:18.816 Error Log 00:17:18.816 ========= 00:17:18.816 Entry: 0 00:17:18.816 Error Count: 0x3 00:17:18.816 Submission Queue Id: 0x0 00:17:18.816 Command Id: 0x5 00:17:18.816 Phase Bit: 0 00:17:18.816 Status Code: 0x2 00:17:18.816 Status Code Type: 0x0 00:17:18.816 Do Not Retry: 1 00:17:18.816 Error Location: 0x28 00:17:18.816 LBA: 0x0 00:17:18.816 Namespace: 0x0 00:17:18.816 Vendor Log Page: 0x0 00:17:18.816 ----------- 00:17:18.816 Entry: 1 00:17:18.816 Error Count: 0x2 00:17:18.816 Submission Queue Id: 0x0 00:17:18.816 Command Id: 0x5 00:17:18.816 Phase Bit: 0 00:17:18.816 Status Code: 0x2 00:17:18.816 Status Code Type: 0x0 00:17:18.816 Do Not Retry: 1 00:17:18.816 Error Location: 0x28 00:17:18.816 LBA: 0x0 00:17:18.816 Namespace: 0x0 00:17:18.816 Vendor Log Page: 0x0 00:17:18.816 ----------- 00:17:18.816 Entry: 2 00:17:18.816 Error Count: 0x1 00:17:18.816 Submission Queue Id: 0x0 00:17:18.816 Command Id: 0x4 00:17:18.816 Phase Bit: 0 00:17:18.816 Status Code: 0x2 00:17:18.816 Status Code Type: 0x0 00:17:18.816 Do Not Retry: 1 00:17:18.816 Error Location: 0x28 00:17:18.816 LBA: 0x0 00:17:18.816 Namespace: 0x0 00:17:18.816 Vendor Log Page: 0x0 00:17:18.816 00:17:18.816 Number of Queues 00:17:18.816 ================ 00:17:18.816 Number of I/O Submission Queues: 128 00:17:18.816 Number of I/O Completion Queues: 128 00:17:18.816 00:17:18.816 ZNS Specific Controller Data 00:17:18.816 ============================ 00:17:18.816 Zone Append Size Limit: 0 00:17:18.816 00:17:18.816 00:17:18.816 Active Namespaces 00:17:18.816 ================= 00:17:18.816 get_feature(0x05) failed 00:17:18.816 Namespace ID:1 00:17:18.816 Command Set Identifier: NVM (00h) 
00:17:18.816 Deallocate: Supported 00:17:18.816 Deallocated/Unwritten Error: Not Supported 00:17:18.816 Deallocated Read Value: Unknown 00:17:18.816 Deallocate in Write Zeroes: Not Supported 00:17:18.816 Deallocated Guard Field: 0xFFFF 00:17:18.816 Flush: Supported 00:17:18.816 Reservation: Not Supported 00:17:18.816 Namespace Sharing Capabilities: Multiple Controllers 00:17:18.816 Size (in LBAs): 1310720 (5GiB) 00:17:18.816 Capacity (in LBAs): 1310720 (5GiB) 00:17:18.816 Utilization (in LBAs): 1310720 (5GiB) 00:17:18.816 UUID: f0616d0c-d28f-4a29-9294-b4dc397ceb5b 00:17:18.816 Thin Provisioning: Not Supported 00:17:18.816 Per-NS Atomic Units: Yes 00:17:18.816 Atomic Boundary Size (Normal): 0 00:17:18.816 Atomic Boundary Size (PFail): 0 00:17:18.816 Atomic Boundary Offset: 0 00:17:18.816 NGUID/EUI64 Never Reused: No 00:17:18.816 ANA group ID: 1 00:17:18.816 Namespace Write Protected: No 00:17:18.816 Number of LBA Formats: 1 00:17:18.816 Current LBA Format: LBA Format #00 00:17:18.816 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:18.816 00:17:18.816 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:18.816 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:18.816 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:18.816 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.816 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:18.816 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.816 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.075 rmmod nvme_tcp 00:17:19.075 rmmod nvme_fabrics 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:19.075 
19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:19.075 19:07:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:19.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:19.934 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:19.934 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:19.934 00:17:19.934 real 0m2.933s 00:17:19.934 user 0m0.993s 00:17:19.934 sys 0m1.400s 00:17:19.934 19:07:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:19.934 ************************************ 00:17:19.934 END TEST nvmf_identify_kernel_target 00:17:19.934 19:07:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.934 ************************************ 00:17:19.934 19:07:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:19.934 19:07:47 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:19.934 19:07:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:19.934 19:07:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:19.934 19:07:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:19.934 ************************************ 00:17:19.934 START TEST nvmf_auth_host 00:17:19.934 ************************************ 00:17:19.934 19:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:20.193 * Looking for test storage... 
00:17:20.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:20.193 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:20.194 Cannot find device "nvmf_tgt_br" 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.194 Cannot find device "nvmf_tgt_br2" 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:20.194 Cannot find device "nvmf_tgt_br" 
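The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first tries to delete any leftovers from a previous run before building a fresh topology. What the entries that follow construct is an initiator veth pair left on the host plus two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge. A condensed sketch of those steps, with interface names and addresses taken from the log (illustrative, not the exact helper from nvmf/common.sh):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator half stays on the host (10.0.0.1)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target half goes into the namespace (10.0.0.2)
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target half (10.0.0.3)
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # verify the data path before the target starts

The three pings (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm connectivity in both directions, after which NVMF_APP is prefixed with "ip netns exec nvmf_tgt_ns_spdk" so the target runs inside the namespace.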
00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:20.194 Cannot find device "nvmf_tgt_br2" 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.194 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:20.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:20.453 00:17:20.453 --- 10.0.0.2 ping statistics --- 00:17:20.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.453 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:20.453 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.453 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:17:20.453 00:17:20.453 --- 10.0.0.3 ping statistics --- 00:17:20.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.453 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:20.453 00:17:20.453 --- 10.0.0.1 ping statistics --- 00:17:20.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.453 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:20.453 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78657 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78657 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78657 ']' 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.454 19:07:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.454 19:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d8e651413e8dde16fd8259aefc76dcfd 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gu6 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d8e651413e8dde16fd8259aefc76dcfd 0 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d8e651413e8dde16fd8259aefc76dcfd 0 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d8e651413e8dde16fd8259aefc76dcfd 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gu6 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gu6 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.gu6 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eeacf1ed704610e190371f80431d8f976fef408cbc7869f325b127242ba6d286 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yKR 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eeacf1ed704610e190371f80431d8f976fef408cbc7869f325b127242ba6d286 3 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eeacf1ed704610e190371f80431d8f976fef408cbc7869f325b127242ba6d286 3 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eeacf1ed704610e190371f80431d8f976fef408cbc7869f325b127242ba6d286 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yKR 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yKR 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.yKR 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c15a619f3739703fba608b2fd86167a0025d184ab7af086f 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.VP5 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c15a619f3739703fba608b2fd86167a0025d184ab7af086f 0 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c15a619f3739703fba608b2fd86167a0025d184ab7af086f 0 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c15a619f3739703fba608b2fd86167a0025d184ab7af086f 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:21.831 19:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.VP5 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.VP5 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.VP5 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cbbaa83eee7d11633f6de46d0a559763f99b43a88585f7da 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Hv1 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cbbaa83eee7d11633f6de46d0a559763f99b43a88585f7da 2 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cbbaa83eee7d11633f6de46d0a559763f99b43a88585f7da 2 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cbbaa83eee7d11633f6de46d0a559763f99b43a88585f7da 00:17:21.831 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:21.832 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:21.832 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Hv1 00:17:21.832 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Hv1 00:17:21.832 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Hv1 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e6a6fbbc2b68fbf12ed773d4bac69154 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dKL 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e6a6fbbc2b68fbf12ed773d4bac69154 
1 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e6a6fbbc2b68fbf12ed773d4bac69154 1 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e6a6fbbc2b68fbf12ed773d4bac69154 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dKL 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dKL 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.dKL 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7e6a7320c5d3c401fc8f471e620ab8e3 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7sv 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7e6a7320c5d3c401fc8f471e620ab8e3 1 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7e6a7320c5d3c401fc8f471e620ab8e3 1 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7e6a7320c5d3c401fc8f471e620ab8e3 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7sv 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7sv 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.7sv 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:22.091 19:07:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=479ac5ad3a5e6e35fb78a9d9984a7cee5ef78b75c0641525 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.H7r 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 479ac5ad3a5e6e35fb78a9d9984a7cee5ef78b75c0641525 2 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 479ac5ad3a5e6e35fb78a9d9984a7cee5ef78b75c0641525 2 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=479ac5ad3a5e6e35fb78a9d9984a7cee5ef78b75c0641525 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.H7r 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.H7r 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.H7r 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=418d306abd283d6f9ddcd158ed23fda2 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.omT 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 418d306abd283d6f9ddcd158ed23fda2 0 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 418d306abd283d6f9ddcd158ed23fda2 0 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=418d306abd283d6f9ddcd158ed23fda2 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:22.091 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.omT 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.omT 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.omT 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d4e903896b27249a7e35f474e814e54e787a96456ec13e2342a8ecbe9b4f4a33 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qn2 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d4e903896b27249a7e35f474e814e54e787a96456ec13e2342a8ecbe9b4f4a33 3 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d4e903896b27249a7e35f474e814e54e787a96456ec13e2342a8ecbe9b4f4a33 3 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d4e903896b27249a7e35f474e814e54e787a96456ec13e2342a8ecbe9b4f4a33 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qn2 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qn2 00:17:22.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.350 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.qn2 00:17:22.351 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:22.351 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78657 00:17:22.351 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78657 ']' 00:17:22.351 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.351 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.351 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
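Each gen_dhchap_key call above follows the same recipe: map the digest name to its DH-HMAC-CHAP hash id (null=0, sha256=1, sha384=2, sha512=3, per the digests array in the log), pull the requested number of hex characters out of /dev/urandom with xxd, and wrap the result into a DHHC-1 secret written to a 0600 temp file. A rough sketch of the shell side; the python one-liner that does the final DHHC-1 packing is not reproduced here, and the comment about what it emits is inferred from the DHHC-1:00:...: strings seen later in this log rather than from its source:

  declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  digest=sha256; len=32                                  # e.g. the 'gen_dhchap_key sha256 32' calls above
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)         # len hex characters of random key material
  file=$(mktemp -t "spdk.key-$digest.XXX")
  # The python helper appears to base64-encode the ASCII key (plus a short checksum trailer) and
  # prefix it with "DHHC-1:0<hash id>:", producing secrets like DHHC-1:00:YzE1YTYx...: further down.
  chmod 0600 "$file"
  echo "$file"

Most keys are generated in pairs: keys[N] is the host secret and ckeys[N] the controller secret used later for bidirectional authentication (ckeys[4] is left empty).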
00:17:22.351 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.351 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gu6 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.yKR ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yKR 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.VP5 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Hv1 ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hv1 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.dKL 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.7sv ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7sv 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
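The loop above (it continues in the next entries for key3 and key4) registers every generated secret with the running nvmf target over its RPC socket, pairing each host key keyN with its controller key ckeyN when one exists. Stripped of the xtrace noise, each pass amounts to roughly the following; rpc_cmd in the log is a wrapper, so the direct scripts/rpc.py invocation shown here is an assumption about what it ends up calling:

  for i in "${!keys[@]}"; do
      scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
      if [[ -n ${ckeys[$i]} ]]; then
          scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
      fi
  done

These keyring names (key0, ckey0, key1, ...) are what the later bdev_nvme_attach_controller calls reference through --dhchap-key and --dhchap-ctrlr-key.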
00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.H7r 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.omT ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.omT 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.qn2 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
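nvmet_auth_init now builds the kernel-side NVMe-oF target that the host authentication tests will connect to. In the entries that follow, configure_kernel_target loads nvmet, runs setup.sh reset to hand the NVMe devices back to the kernel, scans /sys/block/nvme* for an unused, non-zoned device without a partition table (the "No valid GPT data, bailing" probes), and exposes it at 10.0.0.1:4420 through configfs. A rough sketch of that configfs wiring; the xtrace output only shows the echo side of each redirection, so the attribute filenames below follow the standard nvmet configfs layout and should be read as assumptions:

  nqn=nqn.2024-02.io.spdk:cnode0
  cfs=/sys/kernel/config/nvmet
  modprobe nvmet
  mkdir "$cfs/subsystems/$nqn"
  mkdir "$cfs/subsystems/$nqn/namespaces/1"
  mkdir "$cfs/ports/1"
  echo "SPDK-$nqn"  > "$cfs/subsystems/$nqn/attr_model"                  # model string echoed in the log
  echo 1            > "$cfs/subsystems/$nqn/attr_allow_any_host"
  echo /dev/nvme1n1 > "$cfs/subsystems/$nqn/namespaces/1/device_path"    # the free device the scan settles on
  echo 1            > "$cfs/subsystems/$nqn/namespaces/1/enable"
  echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
  echo tcp          > "$cfs/ports/1/addr_trtype"
  echo 4420         > "$cfs/ports/1/addr_trsvcid"
  echo ipv4         > "$cfs/ports/1/addr_adrfam"
  ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"

The nvme discover that follows verifies the subsystem is reachable at 10.0.0.1:4420, and auth.sh then tightens access: it creates hosts/nqn.2024-02.io.spdk:host0, echoes 0 (presumably into attr_allow_any_host), and links the host into the subsystem's allowed_hosts so only that host NQN may connect.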
00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:22.610 19:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:23.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:23.177 Waiting for block devices as requested 00:17:23.177 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:23.177 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:23.744 19:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:23.744 19:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:23.744 19:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:23.744 19:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:23.744 19:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:23.745 19:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:23.745 19:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:23.745 19:07:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:23.745 19:07:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:23.745 No valid GPT data, bailing 00:17:23.745 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:24.002 No valid GPT data, bailing 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:24.002 No valid GPT data, bailing 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:24.002 No valid GPT data, bailing 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:24.002 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:24.261 19:07:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid=1bdc3113-659b-4df6-a9cf-a9738596adff -a 10.0.0.1 -t tcp -s 4420 00:17:24.261 00:17:24.261 Discovery Log Number of Records 2, Generation counter 2 00:17:24.261 =====Discovery Log Entry 0====== 00:17:24.261 trtype: tcp 00:17:24.261 adrfam: ipv4 00:17:24.261 subtype: current discovery subsystem 00:17:24.261 treq: not specified, sq flow control disable supported 00:17:24.261 portid: 1 00:17:24.261 trsvcid: 4420 00:17:24.261 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:24.261 traddr: 10.0.0.1 00:17:24.261 eflags: none 00:17:24.261 sectype: none 00:17:24.261 =====Discovery Log Entry 1====== 00:17:24.261 trtype: tcp 00:17:24.261 adrfam: ipv4 00:17:24.261 subtype: nvme subsystem 00:17:24.261 treq: not specified, sq flow control disable supported 00:17:24.261 portid: 1 00:17:24.261 trsvcid: 4420 00:17:24.261 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:24.261 traddr: 10.0.0.1 00:17:24.261 eflags: none 00:17:24.261 sectype: none 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.261 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.520 nvme0n1 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.520 nvme0n1 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.520 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.779 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.779 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.779 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.779 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.780 nvme0n1 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.780 19:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.780 19:07:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.780 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.039 nvme0n1 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:25.039 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:25.040 19:07:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.040 nvme0n1 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.040 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.298 nvme0n1 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.298 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.299 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.299 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:25.299 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:25.299 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:25.299 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.299 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.557 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:25.557 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:25.557 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:25.557 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:25.557 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.557 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:25.557 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.557 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:25.557 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.557 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:17:25.557 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.557 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.815 nvme0n1 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.815 19:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.815 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.073 nvme0n1 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.073 19:07:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.073 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.330 nvme0n1 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.330 nvme0n1 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.330 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.331 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
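The nvmf/common.sh lines traced here (and finishing just below) resolve the initiator address that the following attach call uses. A minimal sketch of that helper, reconstructed only from the xtrace output above; the name of the transport variable (TEST_TRANSPORT here) is an assumption, while the ip_candidates mapping and the indirect expansion down to 10.0.0.1 are visible in the trace:

get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP
	# trace shows: [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]]
	[[ -z $TEST_TRANSPORT ]] && return 1
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	# indirect expansion: $NVMF_INITIATOR_IP expands to 10.0.0.1 in this run
	[[ -z ${!ip} ]] && return 1
	echo "${!ip}"
}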
00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.588 nvme0n1 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.588 19:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.589 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.589 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.589 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:26.589 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.589 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.589 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:26.589 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.589 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:26.589 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:26.589 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.589 19:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
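Every connect_authenticate pass in this trace follows the same shape: restrict the initiator to the digest/dhgroup combination under test with bdev_nvme_set_options, attach with the matching --dhchap-key, confirm the controller came up, and detach. A condensed sketch pieced together from the host/auth.sh trace lines, assuming the literals seen here (10.0.0.1, port 4420, the host and subsystem NQNs) are supplied by the test environment rather than hard-coded:

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# controller key is optional; when ckeys[keyid] is empty this expands to nothing
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# allow only the digest/dhgroup pair being exercised
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

	# attach to the kernel nvmet target with the key programmed for this keyid
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# authentication succeeded only if the controller actually shows up
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}

For keyid 4 there is no controller key, so the ckey array expands to nothing and --dhchap-ctrlr-key is omitted, which matches the attach call traced just above.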
00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.153 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.154 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 nvme0n1 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.453 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.738 nvme0n1 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.738 19:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.011 nvme0n1 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.011 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.268 nvme0n1 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
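A minimal sketch of the per-key connect/verify cycle this trace keeps repeating, assuming an SPDK target is already listening on 10.0.0.1:4420, that the DH-HMAC-CHAP keys were registered earlier in the run under the names key0..key4 / ckey0..ckey4, and that rpc_cmd is a thin wrapper around scripts/rpc.py (the wrapper and the SPDK_ROOT default below are assumptions, not the test's actual helper):

  # Restrict host-side negotiation to one digest/dhgroup, attach with the key
  # pair under test, confirm the controller came up, then detach again.
  rpc_cmd() { "${SPDK_ROOT:-/usr/src/spdk}/scripts/rpc.py" "$@"; }   # assumed wrapper
  digest=sha256 dhgroup=ffdhe4096 keyid=1                            # values taken from the trace
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]   # expect exactly nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0

The outer loops over digests, dhgroups and key ids (host/auth.sh@100-103 in the trace) re-run this same cycle, rotating the target-side key via nvmet_auth_set_key before each connect.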
00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.268 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.269 19:07:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.269 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.526 nvme0n1 00:17:28.526 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.526 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.526 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.526 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.526 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.526 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.526 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.526 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.526 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.526 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.527 19:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.527 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.527 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.527 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:28.527 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.527 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.527 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.527 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.527 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:28.527 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:28.527 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.527 19:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.467 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.725 nvme0n1 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.725 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.726 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.726 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.726 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.726 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.726 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:17:30.726 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.726 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.726 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.726 19:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.726 19:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.726 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.726 19:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.292 nvme0n1 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.292 
19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.292 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.293 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.293 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.293 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.293 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.293 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.551 nvme0n1 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.551 19:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.119 nvme0n1 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.119 19:07:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.119 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.120 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.120 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.120 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.120 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.120 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:32.120 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.120 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.379 nvme0n1 00:17:32.379 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.379 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.379 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.379 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.379 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.379 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.379 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.379 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.379 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.379 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:32.638 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.639 19:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.206 nvme0n1 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.206 19:08:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.206 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.207 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.773 nvme0n1 00:17:33.773 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.773 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.773 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.773 19:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.773 19:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.773 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.031 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.031 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.031 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.031 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.031 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.031 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.031 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.031 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.596 nvme0n1 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.596 
19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
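The get_main_ns_ip expansion visible around this point (nvmf/common.sh@741-755 in the trace) resolves which local address to dial: tcp maps to NVMF_INITIATOR_IP, which here expands to 10.0.0.1. A reconstructed sketch of that lookup, assuming the transport is carried in a TEST_TRANSPORT-style variable (the variable name and its fallback are assumptions; only its expanded value, tcp, is visible in the trace):

  get_main_ns_ip() {
      local ip
      # Map transport -> name of the environment variable holding the address.
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      ip=${ip_candidates[${TEST_TRANSPORT:-tcp}]}   # e.g. tcp -> NVMF_INITIATOR_IP
      ip=${!ip}                                     # indirect expansion -> 10.0.0.1 here
      echo "$ip"
  }
  # Feeds the connect step: rpc_cmd bdev_nvme_attach_controller ... -a "$(get_main_ns_ip)" -s 4420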
00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.596 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.597 19:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.597 19:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:34.597 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.597 19:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.164 nvme0n1 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:35.164 
19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.164 19:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.115 nvme0n1 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.115 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.116 nvme0n1 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
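Each connect_authenticate pass reduces to the four host-side RPCs traced at host/auth.sh@60-65: allow exactly one DH-CHAP digest and DH group, attach a controller with the matching key (plus the controller key when the key id has one), confirm the controller shows up, then detach it. A standalone sketch with the values from the iteration that just completed above (invoking scripts/rpc.py directly instead of the test's rpc_cmd wrapper is an assumption, as is key0/ckey0 naming the keys registered earlier in this run):

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py bdev_nvme_get_controllers        # expect a controller named nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0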
00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.116 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.403 nvme0n1 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.403 nvme0n1 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.403 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.662 nvme0n1 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.662 nvme0n1 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.662 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.921 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.921 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.921 19:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.921 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:36.921 19:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
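On the target side, nvmet_auth_set_key (host/auth.sh@42-51) picks the hash and DH group and installs the DHHC-1 secret, plus the bidirectional controller secret when the key id has one, for the kernel nvmet host entry. A sketch of what those echoes amount to; the configfs paths are assumed here rather than shown in the trace:

host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
echo 'hmac(sha384)' > "$host_cfg/dhchap_hash"
echo ffdhe3072      > "$host_cfg/dhchap_dhgroup"
echo "$key"         > "$host_cfg/dhchap_key"          # DHHC-1:... secret for this key id
[[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"   # only when a ctrl key exists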
00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.921 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.922 nvme0n1 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.922 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
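The get_main_ns_ip helper traced at nvmf/common.sh@741-755 only decides which address the host should dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which resolves to 10.0.0.1 in this virtual-network run. A simplified sketch (the transport variable name is an assumption; the traced helper also rejects empty transports and addresses, as the [[ -z ... ]] checks above show):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}     # tcp here, so NVMF_INITIATOR_IP
    [[ -n $ip && -n ${!ip} ]] || return 1    # bail out if transport or address is unset
    echo "${!ip}"                            # indirect expansion -> 10.0.0.1
}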
00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.181 nvme0n1 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.181 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.440 nvme0n1 00:17:37.440 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.440 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.441 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.700 nvme0n1 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:37.700 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.701 nvme0n1 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.701 19:08:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.701 19:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.960 nvme0n1 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.960 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.220 nvme0n1 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.220 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.479 19:08:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.480 nvme0n1 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.480 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:38.740 19:08:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.740 19:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.000 nvme0n1 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:39.000 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.260 nvme0n1 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.260 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.520 nvme0n1 00:17:39.520 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.520 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.520 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.520 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.520 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.520 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.520 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.520 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.520 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.520 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.779 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.780 19:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.039 nvme0n1 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.039 19:08:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.039 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.606 nvme0n1 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.606 19:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.866 nvme0n1 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
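Each pass in the trace above follows the same host-side sequence, reconstructed here from the rpc_cmd calls it logs (rpc_cmd is the test framework's JSON-RPC wrapper; the address, NQNs and key names are the ones this run uses, and the digest/dhgroup/keyid rotate per pass — this is a sketch of the flow, not the verbatim host/auth.sh source):

    # one connect_authenticate pass, as logged for sha384/ffdhe6144/keyid 0
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # confirm the authenticated controller came up, then tear it down for the next pass
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

(keyid 4 carries no controller key in this run, so its attach is issued without --dhchap-ctrlr-key, as the ffdhe6144/keyid 4 pass above shows.)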
00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.866 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.433 nvme0n1 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
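The host/auth.sh markers in the trace (@101 for dhgroup, @102 for keyid, @103 nvmet_auth_set_key, @104 connect_authenticate) reveal the driver structure behind the repetition: for each DH group, every key ID is first programmed on the kernel nvmet target and then authenticated against from the host. A rough bash reconstruction of the sha384 rounds covered by this excerpt — the loop bodies are inferred from the trace, and nvmet_auth_set_key's internals are not shown here:

    # inferred outer loops for the sha384 rounds in this excerpt
    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do           # key IDs 0..4 in this run
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # install key/ckey on the target side
            connect_authenticate sha384 "$dhgroup" "$keyid"  # attach, verify, detach (see sketch above)
        done
    done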
00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.433 19:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.000 nvme0n1 00:17:42.000 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.000 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.000 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.000 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.000 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.000 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.000 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.001 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.935 nvme0n1 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.935 19:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.504 nvme0n1 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.504 19:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.070 nvme0n1 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.070 19:08:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.070 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.637 nvme0n1 00:17:44.637 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.637 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.637 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.637 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.637 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:44.895 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.896 19:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.896 nvme0n1 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.896 19:08:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.896 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.155 nvme0n1 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.155 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.414 nvme0n1 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.414 19:08:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.414 19:08:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.414 nvme0n1 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.414 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.673 nvme0n1 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.673 19:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.931 nvme0n1 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.931 
19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.931 19:08:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.931 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.189 nvme0n1 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.189 nvme0n1 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.189 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.447 19:08:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.447 nvme0n1 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.447 
19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.447 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.705 nvme0n1 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.705 19:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.963 nvme0n1 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.963 19:08:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.963 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.221 nvme0n1 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
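
Alongside the host-side RPCs, every iteration provisions the same secret on the target through nvmet_auth_set_key (the auth.sh@42-@51 entries in this trace). xtrace only captures the echoed values, not where they are redirected; presumably they land in the kernel nvmet host's DH-CHAP configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key and, when a controller key exists, dhchap_ctrl_key), so treat the paths below as an assumption. For the sha512/ffdhe4096/keyid 2 step that continues below, a sketch of that target-side half:

# assumed configfs location of the allowed-host entry; not visible in the trace
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$host/dhchap_hash"
echo ffdhe4096 > "$host/dhchap_dhgroup"
echo 'DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29:' > "$host/dhchap_key"
# the controller key is written only when one is defined for this keyid
ckey='DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4:'
[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
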
00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.221 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.480 nvme0n1 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.480 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.738 nvme0n1 00:17:47.738 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.738 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.738 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.738 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.738 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.738 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.738 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.738 19:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.738 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.738 19:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.738 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.739 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.739 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.739 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:47.739 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.739 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.997 nvme0n1 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:47.997 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.998 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
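
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion traced just above is what makes bidirectional authentication optional per key: the ${var:+...} form yields the extra arguments only when a controller key is defined for that keyid. In the ffdhe6144/keyid 0 pass in progress here it expands to --dhchap-ctrlr-key ckey0; for keyid 4, whose controller key is empty in this run, it expands to nothing, which is why the key4 attaches in this log carry --dhchap-key key4 alone. A minimal sketch, assuming ckeys[] is the controller-key array auth.sh fills in earlier (not visible in this excerpt):

keyid=4
ckeys[keyid]=''                                              # empty for keyid 4 in this run
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})    # expands to an empty array
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key "key${keyid}" "${ckey[@]}"                  # ctrlr-key flag omitted entirely
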
00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.255 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.513 nvme0n1 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
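
connect_authenticate, whose sha512/ffdhe6144/keyid 1 invocation closes the entry above, is the host-side half of each pass. Assembled from the surrounding trace (rpc_cmd is the test suite's rpc.py wrapper; the address, NQNs and key names are copied verbatim from this log), one pass amounts to roughly:

rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key1 --dhchap-ctrlr-key ckey1
# confirm the DH-CHAP-authenticated controller actually came up, then detach it
# so the next digest/dhgroup/keyid combination starts from a clean state
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0
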
00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.513 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.514 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.514 19:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.514 19:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.514 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.514 19:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.772 nvme0n1 00:17:48.772 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.772 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.772 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.772 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.772 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.031 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.290 nvme0n1 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.290 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.857 nvme0n1 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.857 19:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.116 nvme0n1 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.116 19:08:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhlNjUxNDEzZThkZGUxNmZkODI1OWFlZmM3NmRjZmS1QfCE: 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: ]] 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWVhY2YxZWQ3MDQ2MTBlMTkwMzcxZjgwNDMxZDhmOTc2ZmVmNDA4Y2JjNzg2OWYzMjViMTI3MjQyYmE2ZDI4Nlo1qvg=: 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.116 19:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.386 19:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.951 nvme0n1 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.951 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.518 nvme0n1 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.518 19:08:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZhNmZiYmMyYjY4ZmJmMTJlZDc3M2Q0YmFjNjkxNTQ+ho29: 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: ]] 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2U2YTczMjBjNWQzYzQwMWZjOGY0NzFlNjIwYWI4ZTOkXDq4: 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.518 19:08:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.519 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.519 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.519 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.519 19:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.519 19:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.519 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.519 19:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.453 nvme0n1 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc5YWM1YWQzYTVlNmUzNWZiNzhhOWQ5OTg0YTdjZWU1ZWY3OGI3NWMwNjQxNTI1t1IQOA==: 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: ]] 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE4ZDMwNmFiZDI4M2Q2ZjlkZGNkMTU4ZWQyM2ZkYTIZ+WO3: 00:17:52.453 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:52.453 19:08:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.454 19:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.022 nvme0n1 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRlOTAzODk2YjI3MjQ5YTdlMzVmNDc0ZTgxNGU1NGU3ODdhOTY0NTZlYzEzZTIzNDJhOGVjYmU5YjRmNGEzM98zvuQ=: 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:53.022 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.590 nvme0n1 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzE1YTYxOWYzNzM5NzAzZmJhNjA4YjJmZDg2MTY3YTAwMjVkMTg0YWI3YWYwODZmDvbQWg==: 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: ]] 00:17:53.590 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JiYWE4M2VlZTdkMTE2MzNmNmRlNDZkMGE1NTk3NjNmOTliNDNhODg1ODVmN2Rhz4rvTQ==: 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.591 
19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.591 request: 00:17:53.591 { 00:17:53.591 "name": "nvme0", 00:17:53.591 "trtype": "tcp", 00:17:53.591 "traddr": "10.0.0.1", 00:17:53.591 "adrfam": "ipv4", 00:17:53.591 "trsvcid": "4420", 00:17:53.591 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:53.591 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:53.591 "prchk_reftag": false, 00:17:53.591 "prchk_guard": false, 00:17:53.591 "hdgst": false, 00:17:53.591 "ddgst": false, 00:17:53.591 "method": "bdev_nvme_attach_controller", 00:17:53.591 "req_id": 1 00:17:53.591 } 00:17:53.591 Got JSON-RPC error response 00:17:53.591 response: 00:17:53.591 { 00:17:53.591 "code": -5, 00:17:53.591 "message": "Input/output error" 00:17:53.591 } 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.591 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.851 request: 00:17:53.851 { 00:17:53.851 "name": "nvme0", 00:17:53.851 "trtype": "tcp", 00:17:53.851 "traddr": "10.0.0.1", 00:17:53.851 "adrfam": "ipv4", 00:17:53.851 "trsvcid": "4420", 00:17:53.851 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:53.851 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:53.851 "prchk_reftag": false, 00:17:53.851 "prchk_guard": false, 00:17:53.851 "hdgst": false, 00:17:53.851 "ddgst": false, 00:17:53.851 "dhchap_key": "key2", 00:17:53.851 "method": "bdev_nvme_attach_controller", 00:17:53.851 "req_id": 1 00:17:53.851 } 00:17:53.851 Got JSON-RPC error response 00:17:53.851 response: 00:17:53.851 { 00:17:53.851 "code": -5, 00:17:53.851 "message": "Input/output error" 00:17:53.851 } 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:53.851 19:08:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.851 19:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.851 request: 00:17:53.851 { 00:17:53.851 "name": "nvme0", 00:17:53.851 "trtype": "tcp", 00:17:53.851 "traddr": "10.0.0.1", 00:17:53.851 "adrfam": "ipv4", 
00:17:53.851 "trsvcid": "4420", 00:17:53.851 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:53.851 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:53.851 "prchk_reftag": false, 00:17:53.851 "prchk_guard": false, 00:17:53.851 "hdgst": false, 00:17:53.851 "ddgst": false, 00:17:53.851 "dhchap_key": "key1", 00:17:53.851 "dhchap_ctrlr_key": "ckey2", 00:17:53.851 "method": "bdev_nvme_attach_controller", 00:17:53.851 "req_id": 1 00:17:53.851 } 00:17:53.851 Got JSON-RPC error response 00:17:53.851 response: 00:17:53.851 { 00:17:53.851 "code": -5, 00:17:53.851 "message": "Input/output error" 00:17:53.851 } 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:53.851 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.852 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.852 rmmod nvme_tcp 00:17:53.852 rmmod nvme_fabrics 00:17:53.852 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78657 ']' 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78657 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78657 ']' 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78657 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78657 00:17:54.110 killing process with pid 78657 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78657' 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78657 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78657 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:54.110 
19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.110 19:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:54.370 19:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:54.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:55.199 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:55.199 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:55.199 19:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.gu6 /tmp/spdk.key-null.VP5 /tmp/spdk.key-sha256.dKL /tmp/spdk.key-sha384.H7r /tmp/spdk.key-sha512.qn2 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:55.199 19:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:55.458 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:55.458 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:55.458 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:55.740 00:17:55.740 real 0m35.587s 00:17:55.740 user 0m32.159s 00:17:55.740 sys 0m3.837s 00:17:55.740 ************************************ 00:17:55.740 19:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.740 19:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:17:55.740 END TEST nvmf_auth_host 00:17:55.740 ************************************ 00:17:55.740 19:08:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:55.740 19:08:22 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:17:55.740 19:08:22 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:55.740 19:08:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:55.740 19:08:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.740 19:08:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.740 ************************************ 00:17:55.740 START TEST nvmf_digest 00:17:55.740 ************************************ 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:55.740 * Looking for test storage... 00:17:55.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:55.740 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.741 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:55.741 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:55.741 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:55.741 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:55.741 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:55.741 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:55.741 Cannot find device "nvmf_tgt_br" 00:17:55.741 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:55.741 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:55.741 Cannot find device "nvmf_tgt_br2" 00:17:55.741 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:55.741 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:55.741 19:08:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:55.741 Cannot find device "nvmf_tgt_br" 00:17:55.741 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:55.741 19:08:23 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:56.005 Cannot find device "nvmf_tgt_br2" 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:56.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:56.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:56.005 19:08:23 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:56.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:56.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:17:56.005 00:17:56.005 --- 10.0.0.2 ping statistics --- 00:17:56.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.005 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:56.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:56.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:56.005 00:17:56.005 --- 10.0.0.3 ping statistics --- 00:17:56.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.005 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:56.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:56.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:56.005 00:17:56.005 --- 10.0.0.1 ping statistics --- 00:17:56.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.005 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:56.005 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:56.006 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.006 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:56.006 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:56.006 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.006 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:56.006 19:08:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:56.266 ************************************ 00:17:56.266 START TEST nvmf_digest_clean 00:17:56.266 ************************************ 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:56.266 19:08:23 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80224 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80224 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80224 ']' 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.266 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.267 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.267 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.267 19:08:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:56.267 [2024-07-15 19:08:23.376867] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:17:56.267 [2024-07-15 19:08:23.376992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.267 [2024-07-15 19:08:23.518809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.525 [2024-07-15 19:08:23.645477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.525 [2024-07-15 19:08:23.645613] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.525 [2024-07-15 19:08:23.645627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.525 [2024-07-15 19:08:23.645635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.525 [2024-07-15 19:08:23.645642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:56.525 [2024-07-15 19:08:23.645675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.091 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.091 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:57.091 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:57.091 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:57.091 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:57.091 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.091 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:57.091 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:57.091 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:57.091 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.091 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:57.349 [2024-07-15 19:08:24.436810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:57.349 null0 00:17:57.349 [2024-07-15 19:08:24.489467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.349 [2024-07-15 19:08:24.513615] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.349 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80262 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80262 /var/tmp/bperf.sock 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80262 ']' 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:57.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.350 19:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:57.350 [2024-07-15 19:08:24.607142] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:17:57.350 [2024-07-15 19:08:24.607648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80262 ] 00:17:57.609 [2024-07-15 19:08:24.760416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.867 [2024-07-15 19:08:24.919374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.434 19:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.434 19:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:58.434 19:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:58.434 19:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:58.434 19:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:59.003 [2024-07-15 19:08:25.998841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:59.003 19:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:59.003 19:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:59.262 nvme0n1 00:17:59.262 19:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:59.262 19:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:59.262 Running I/O for 2 seconds... 
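Each clean-digest run drives the same three RPC steps against the bdevperf instance listening on /var/tmp/bperf.sock, all visible in the trace: finish framework init (bdevperf was started with --wait-for-rpc), attach an NVMe-oF controller over TCP with the data digest enabled (--ddgst), and then push I/O through bdevperf.py perform_tests. The commands below are copied from the trace above; only the shell variables are added for readability:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    $rpc -s $sock framework_start_init
    $rpc -s $sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests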
00:18:01.794 00:18:01.794 Latency(us) 00:18:01.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.794 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:01.794 nvme0n1 : 2.01 14926.21 58.31 0.00 0.00 8569.41 7447.27 17992.61 00:18:01.794 =================================================================================================================== 00:18:01.794 Total : 14926.21 58.31 0.00 0.00 8569.41 7447.27 17992.61 00:18:01.794 0 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:01.794 | select(.opcode=="crc32c") 00:18:01.794 | "\(.module_name) \(.executed)"' 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80262 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80262 ']' 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80262 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80262 00:18:01.794 killing process with pid 80262 00:18:01.794 Received shutdown signal, test time was about 2.000000 seconds 00:18:01.794 00:18:01.794 Latency(us) 00:18:01.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.794 =================================================================================================================== 00:18:01.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80262' 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80262 00:18:01.794 19:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80262 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80321 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80321 /var/tmp/bperf.sock 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80321 ']' 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:01.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.794 19:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:02.053 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:02.053 Zero copy mechanism will not be used. 00:18:02.053 [2024-07-15 19:08:29.102070] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:18:02.053 [2024-07-15 19:08:29.102154] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80321 ] 00:18:02.053 [2024-07-15 19:08:29.236370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.311 [2024-07-15 19:08:29.347431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.877 19:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.877 19:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:02.877 19:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:02.877 19:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:02.877 19:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:03.136 [2024-07-15 19:08:30.389829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:03.395 19:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.395 19:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.658 nvme0n1 00:18:03.658 19:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:03.658 19:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:03.658 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:03.658 Zero copy mechanism will not be used. 00:18:03.658 Running I/O for 2 seconds... 
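After each 2-second run, host/digest.sh decides pass/fail by asking the bdevperf app which accel module actually computed the crc32c digests. With DSA disabled (scan_dsa=false) the expected module is software, and the check is the accel_get_stats call plus the jq filter shown at host/digest.sh@36-37:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # Expected shape of the output: "software <count>" with a non-zero count; the
    # script requires executed > 0 and module_name to match exp_module.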
00:18:06.187 00:18:06.187 Latency(us) 00:18:06.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.187 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:06.187 nvme0n1 : 2.00 7358.55 919.82 0.00 0.00 2170.65 1861.82 7060.01 00:18:06.187 =================================================================================================================== 00:18:06.187 Total : 7358.55 919.82 0.00 0.00 2170.65 1861.82 7060.01 00:18:06.187 0 00:18:06.187 19:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:06.187 19:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:06.187 19:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:06.187 19:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:06.187 | select(.opcode=="crc32c") 00:18:06.187 | "\(.module_name) \(.executed)"' 00:18:06.187 19:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80321 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80321 ']' 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80321 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80321 00:18:06.187 killing process with pid 80321 00:18:06.187 Received shutdown signal, test time was about 2.000000 seconds 00:18:06.187 00:18:06.187 Latency(us) 00:18:06.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.187 =================================================================================================================== 00:18:06.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80321' 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80321 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80321 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80377 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80377 /var/tmp/bperf.sock 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80377 ']' 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:06.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.187 19:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:06.444 [2024-07-15 19:08:33.519388] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:18:06.444 [2024-07-15 19:08:33.519558] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80377 ] 00:18:06.444 [2024-07-15 19:08:33.661073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.701 [2024-07-15 19:08:33.789378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.266 19:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.266 19:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:07.266 19:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:07.266 19:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:07.266 19:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:07.524 [2024-07-15 19:08:34.799787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:07.784 19:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.784 19:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:08.056 nvme0n1 00:18:08.056 19:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:08.056 19:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:08.056 Running I/O for 2 seconds... 
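nvmf_digest_clean repeats the identical flow for four workloads; only the bdevperf -w/-o/-q arguments change between runs, and each instance is killed before the next one starts. The four invocations, exactly as they appear in this log (the 131072-byte runs also trigger the "greater than zero copy threshold (65536)" notice, so zero copy is skipped for them):

    bperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    $bperf -m 2 -r /var/tmp/bperf.sock -w randread  -o 4096   -t 2 -q 128 -z --wait-for-rpc   # run 1
    $bperf -m 2 -r /var/tmp/bperf.sock -w randread  -o 131072 -t 2 -q 16  -z --wait-for-rpc   # run 2
    $bperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096   -t 2 -q 128 -z --wait-for-rpc   # run 3
    $bperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16  -z --wait-for-rpc   # run 4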
00:18:10.590 00:18:10.590 Latency(us) 00:18:10.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.590 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.590 nvme0n1 : 2.01 16184.90 63.22 0.00 0.00 7901.78 6285.50 15609.48 00:18:10.590 =================================================================================================================== 00:18:10.590 Total : 16184.90 63.22 0.00 0.00 7901.78 6285.50 15609.48 00:18:10.590 0 00:18:10.590 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:10.590 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:10.590 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:10.590 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:10.590 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:10.590 | select(.opcode=="crc32c") 00:18:10.590 | "\(.module_name) \(.executed)"' 00:18:10.590 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80377 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80377 ']' 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80377 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80377 00:18:10.591 killing process with pid 80377 00:18:10.591 Received shutdown signal, test time was about 2.000000 seconds 00:18:10.591 00:18:10.591 Latency(us) 00:18:10.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.591 =================================================================================================================== 00:18:10.591 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80377' 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80377 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80377 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80443 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80443 /var/tmp/bperf.sock 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80443 ']' 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:10.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.591 19:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:10.850 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:10.850 Zero copy mechanism will not be used. 00:18:10.850 [2024-07-15 19:08:37.929702] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:18:10.850 [2024-07-15 19:08:37.929796] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80443 ] 00:18:10.850 [2024-07-15 19:08:38.063001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.108 [2024-07-15 19:08:38.171230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.714 19:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.714 19:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:11.714 19:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:11.714 19:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:11.714 19:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:11.973 [2024-07-15 19:08:39.148656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:11.973 19:08:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:11.973 19:08:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:12.540 nvme0n1 00:18:12.540 19:08:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:12.540 19:08:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:12.540 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:12.540 Zero copy mechanism will not be used. 00:18:12.540 Running I/O for 2 seconds... 
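Teardown between runs is the killprocess helper from autotest_common.sh. Judging from the trace, it verifies the pid is still alive and is the expected bdevperf reactor before killing it and reaping the exit status; a rough sketch, with bperfpid as an illustrative variable holding the pid printed above:

    kill -0 "$bperfpid"                    # process must still exist
    ps --no-headers -o comm= "$bperfpid"   # confirm it is the expected process (reactor_1 here)
    kill "$bperfpid"
    wait "$bperfpid"                       # collect the exit status so a crashed run fails loudly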
00:18:14.440 00:18:14.440 Latency(us) 00:18:14.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.440 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:14.440 nvme0n1 : 2.00 6385.01 798.13 0.00 0.00 2500.15 1474.56 3932.16 00:18:14.440 =================================================================================================================== 00:18:14.440 Total : 6385.01 798.13 0.00 0.00 2500.15 1474.56 3932.16 00:18:14.440 0 00:18:14.440 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:14.440 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:14.440 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:14.440 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:14.440 | select(.opcode=="crc32c") 00:18:14.440 | "\(.module_name) \(.executed)"' 00:18:14.440 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:15.006 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:15.007 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:15.007 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:15.007 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:15.007 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80443 00:18:15.007 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80443 ']' 00:18:15.007 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80443 00:18:15.007 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:15.007 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:15.007 19:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80443 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:15.007 killing process with pid 80443 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80443' 00:18:15.007 Received shutdown signal, test time was about 2.000000 seconds 00:18:15.007 00:18:15.007 Latency(us) 00:18:15.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.007 =================================================================================================================== 00:18:15.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80443 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80443 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80224 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80224 ']' 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80224 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80224 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:15.007 killing process with pid 80224 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80224' 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80224 00:18:15.007 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80224 00:18:15.265 00:18:15.265 real 0m19.181s 00:18:15.265 user 0m37.440s 00:18:15.265 sys 0m4.880s 00:18:15.265 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:15.265 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:15.265 ************************************ 00:18:15.265 END TEST nvmf_digest_clean 00:18:15.265 ************************************ 00:18:15.265 19:08:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:15.265 19:08:42 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:15.265 19:08:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:15.265 19:08:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:15.266 ************************************ 00:18:15.266 START TEST nvmf_digest_error 00:18:15.266 ************************************ 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:15.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
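The error variant that starts here brings up a fresh target the same way, but before framework init it routes the crc32c opcode through the accel "error" module so digest corruption can be injected on demand. The target-side RPCs that control this appear further down in the trace; rpc_cmd is the harness wrapper that talks to the target's /var/tmp/spdk.sock:

    rpc_cmd accel_assign_opc -o crc32c -m error                   # all crc32c work goes to the error module
    rpc_cmd accel_error_inject_error -o crc32c -t disable         # baseline: behave like plain software crc32c
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt the next 256 digest computations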
00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80526 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80526 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80526 ']' 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.266 19:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:15.523 [2024-07-15 19:08:42.602916] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:18:15.523 [2024-07-15 19:08:42.603012] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.523 [2024-07-15 19:08:42.736005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.781 [2024-07-15 19:08:42.846668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.781 [2024-07-15 19:08:42.846726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.781 [2024-07-15 19:08:42.846738] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.781 [2024-07-15 19:08:42.846746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.781 [2024-07-15 19:08:42.846754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:15.781 [2024-07-15 19:08:42.846778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.347 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.347 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:16.347 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.347 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:16.347 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.348 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.348 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:16.348 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.348 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.348 [2024-07-15 19:08:43.631256] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:16.348 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.348 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:16.348 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:16.348 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.348 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.605 [2024-07-15 19:08:43.695314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:16.605 null0 00:18:16.605 [2024-07-15 19:08:43.743721] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.606 [2024-07-15 19:08:43.767827] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80558 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80558 /var/tmp/bperf.sock 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80558 ']' 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 
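On the initiator side the error test differs from the clean test in two ways, both visible just below: bdevperf is started without --wait-for-rpc, and the NVMe bdev module is configured for per-command error statistics and unlimited retries, so the corrupted digests should surface as retried transient transport errors rather than failing the job outright. The two bperf.sock RPCs, copied from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0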
00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.606 19:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.606 [2024-07-15 19:08:43.832670] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:18:16.606 [2024-07-15 19:08:43.832782] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80558 ] 00:18:16.865 [2024-07-15 19:08:43.976147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.865 [2024-07-15 19:08:44.108670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.123 [2024-07-15 19:08:44.168393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:17.771 19:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.771 19:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:17.771 19:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:17.771 19:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:17.771 19:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:17.771 19:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.771 19:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:17.771 19:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.771 19:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:17.771 19:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:18.337 nvme0n1 00:18:18.337 19:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:18.337 19:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.337 19:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:18.337 19:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.337 19:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 
00:18:18.337 19:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:18.337 Running I/O for 2 seconds... 00:18:18.337 [2024-07-15 19:08:45.560887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.337 [2024-07-15 19:08:45.560951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.337 [2024-07-15 19:08:45.560968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.337 [2024-07-15 19:08:45.577814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.337 [2024-07-15 19:08:45.577856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.337 [2024-07-15 19:08:45.577870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.337 [2024-07-15 19:08:45.594663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.337 [2024-07-15 19:08:45.594704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.337 [2024-07-15 19:08:45.594718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.337 [2024-07-15 19:08:45.611484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.337 [2024-07-15 19:08:45.611533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.337 [2024-07-15 19:08:45.611547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.628414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.628457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.628471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.645316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.645369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.645384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.662338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.662388] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.662403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.679239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.679287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.679302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.696181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.696230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.696244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.713061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.713115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.713129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.729960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.730008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.730022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.746803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.746845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.746859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.763633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.763682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.763696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.780592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.780638] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.780651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.797399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.797448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.797462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.814257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.814324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.814339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.831409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.831476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.831491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.848846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.848916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.848931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.866407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.866476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.866491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.596 [2024-07-15 19:08:45.883861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.596 [2024-07-15 19:08:45.883929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.596 [2024-07-15 19:08:45.883945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:45.901011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1279fc0) 00:18:18.855 [2024-07-15 19:08:45.901053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.855 [2024-07-15 19:08:45.901067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:45.918020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.855 [2024-07-15 19:08:45.918067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.855 [2024-07-15 19:08:45.918082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:45.934976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.855 [2024-07-15 19:08:45.935027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.855 [2024-07-15 19:08:45.935042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:45.951879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.855 [2024-07-15 19:08:45.951938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.855 [2024-07-15 19:08:45.951953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:45.969300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.855 [2024-07-15 19:08:45.969364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.855 [2024-07-15 19:08:45.969379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:45.986603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.855 [2024-07-15 19:08:45.986668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.855 [2024-07-15 19:08:45.986682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:46.003852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.855 [2024-07-15 19:08:46.003918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.855 [2024-07-15 19:08:46.003933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:46.021152] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.855 [2024-07-15 19:08:46.021231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.855 [2024-07-15 19:08:46.021245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:46.038330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.855 [2024-07-15 19:08:46.038391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.855 [2024-07-15 19:08:46.038407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:46.055473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.855 [2024-07-15 19:08:46.055546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.855 [2024-07-15 19:08:46.055561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:46.072414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.855 [2024-07-15 19:08:46.072458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.855 [2024-07-15 19:08:46.072472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:46.089511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.855 [2024-07-15 19:08:46.089568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.855 [2024-07-15 19:08:46.089584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.855 [2024-07-15 19:08:46.106796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.856 [2024-07-15 19:08:46.106868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.856 [2024-07-15 19:08:46.106883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.856 [2024-07-15 19:08:46.123956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.856 [2024-07-15 19:08:46.124016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.856 [2024-07-15 19:08:46.124031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:18.856 [2024-07-15 19:08:46.142667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:18.856 [2024-07-15 19:08:46.142748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.856 [2024-07-15 19:08:46.142762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.159846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.159887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.159901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.176740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.176779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.176793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.193826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.193864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.193877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.210803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.210841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.210855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.227629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.227677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.227700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.244587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.244642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.244657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.261454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.261511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.261527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.278335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.278375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.278389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.295175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.295216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.295230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.311981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.312020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.312034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.328801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.328839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.328853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.345643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.345681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.345694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.362568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.362607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.362638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.114 [2024-07-15 19:08:46.379434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.114 [2024-07-15 19:08:46.379472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.114 [2024-07-15 19:08:46.379486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.115 [2024-07-15 19:08:46.396351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.115 [2024-07-15 19:08:46.396403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.115 [2024-07-15 19:08:46.396418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.373 [2024-07-15 19:08:46.413323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.373 [2024-07-15 19:08:46.413370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.373 [2024-07-15 19:08:46.413385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.373 [2024-07-15 19:08:46.430162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.373 [2024-07-15 19:08:46.430203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.373 [2024-07-15 19:08:46.430217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.373 [2024-07-15 19:08:46.447044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.373 [2024-07-15 19:08:46.447096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.373 [2024-07-15 19:08:46.447111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.373 [2024-07-15 19:08:46.464031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.374 [2024-07-15 19:08:46.464089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.374 [2024-07-15 19:08:46.464103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.374 [2024-07-15 19:08:46.480867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.374 [2024-07-15 19:08:46.480911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:19.374 [2024-07-15 19:08:46.480925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.374 [2024-07-15 19:08:46.497712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.374 [2024-07-15 19:08:46.497755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.374 [2024-07-15 19:08:46.497779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.374 [2024-07-15 19:08:46.514645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.374 [2024-07-15 19:08:46.514708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.374 [2024-07-15 19:08:46.514723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.374 [2024-07-15 19:08:46.531673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.374 [2024-07-15 19:08:46.531736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.374 [2024-07-15 19:08:46.531751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.374 [2024-07-15 19:08:46.548733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.374 [2024-07-15 19:08:46.548805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.374 [2024-07-15 19:08:46.548819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.374 [2024-07-15 19:08:46.565808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.374 [2024-07-15 19:08:46.565878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.374 [2024-07-15 19:08:46.565894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.374 [2024-07-15 19:08:46.582825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.374 [2024-07-15 19:08:46.582880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.374 [2024-07-15 19:08:46.582895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.374 [2024-07-15 19:08:46.599638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.374 [2024-07-15 19:08:46.599677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 
nsid:1 lba:11259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.374 [2024-07-15 19:08:46.599690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.374 [2024-07-15 19:08:46.616419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.374 [2024-07-15 19:08:46.616456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.374 [2024-07-15 19:08:46.616469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.374 [2024-07-15 19:08:46.640512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.374 [2024-07-15 19:08:46.640549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.374 [2024-07-15 19:08:46.640563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.374 [2024-07-15 19:08:46.657347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.374 [2024-07-15 19:08:46.657385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.374 [2024-07-15 19:08:46.657398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.674157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.674196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.674209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.690930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.690967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.690980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.707738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.707776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.707790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.724534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.724575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.724589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.741448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.741495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.741521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.758491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.758567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.758583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.775422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.775467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.775482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.792452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.792495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.792523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.809389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.809428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.809441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.826292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.826330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.826343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.843209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.843249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.843263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.860079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.860121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.860134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.876986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.877029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.877042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.893863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.893901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.893914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.633 [2024-07-15 19:08:46.910714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.633 [2024-07-15 19:08:46.910754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.633 [2024-07-15 19:08:46.910767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:46.927750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:46.927791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:46.927806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:46.944771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:46.944810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:46.944824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:46.961687] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:46.961725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:46.961738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:46.978734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:46.978777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:46.978792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:46.995706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:46.995748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:46.995763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:47.012625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:47.012664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:47.012679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:47.029542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:47.029580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:47.029594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:47.046494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:47.046560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:47.046575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:47.063604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:47.063668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:47.063683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:47.080631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:47.080687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:47.080710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:47.097533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:47.097572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:47.097586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:47.114452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:47.114496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:47.114522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:47.131568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:47.131614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:47.131629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:47.148541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:47.148580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:47.148594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.892 [2024-07-15 19:08:47.165464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:19.892 [2024-07-15 19:08:47.165512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.892 [2024-07-15 19:08:47.165528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.150 [2024-07-15 19:08:47.182542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.150 [2024-07-15 19:08:47.182611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.150 [2024-07-15 19:08:47.182626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.150 [2024-07-15 19:08:47.199601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.150 [2024-07-15 19:08:47.199667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.150 [2024-07-15 19:08:47.199682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.150 [2024-07-15 19:08:47.216542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.150 [2024-07-15 19:08:47.216582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.150 [2024-07-15 19:08:47.216596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.150 [2024-07-15 19:08:47.233556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.150 [2024-07-15 19:08:47.233598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.150 [2024-07-15 19:08:47.233612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.150 [2024-07-15 19:08:47.250632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.150 [2024-07-15 19:08:47.250674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.150 [2024-07-15 19:08:47.250688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.150 [2024-07-15 19:08:47.267476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.150 [2024-07-15 19:08:47.267527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.150 [2024-07-15 19:08:47.267541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.150 [2024-07-15 19:08:47.284345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.150 [2024-07-15 19:08:47.284387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.150 [2024-07-15 19:08:47.284401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.150 [2024-07-15 19:08:47.301265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.151 [2024-07-15 19:08:47.301306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.151 [2024-07-15 19:08:47.301320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.151 [2024-07-15 19:08:47.318178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.151 [2024-07-15 19:08:47.318221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.151 [2024-07-15 19:08:47.318242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.151 [2024-07-15 19:08:47.335189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.151 [2024-07-15 19:08:47.335231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.151 [2024-07-15 19:08:47.335245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.151 [2024-07-15 19:08:47.352061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.151 [2024-07-15 19:08:47.352100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.151 [2024-07-15 19:08:47.352114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.151 [2024-07-15 19:08:47.368943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.151 [2024-07-15 19:08:47.368985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.151 [2024-07-15 19:08:47.368998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.151 [2024-07-15 19:08:47.385893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.151 [2024-07-15 19:08:47.385937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.151 [2024-07-15 19:08:47.385952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.151 [2024-07-15 19:08:47.402913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.151 [2024-07-15 19:08:47.402957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.151 [2024-07-15 19:08:47.402972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.151 [2024-07-15 19:08:47.419833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.151 [2024-07-15 19:08:47.419876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:20.151 [2024-07-15 19:08:47.419890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.151 [2024-07-15 19:08:47.436762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.151 [2024-07-15 19:08:47.436804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.151 [2024-07-15 19:08:47.436818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.422 [2024-07-15 19:08:47.453714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.422 [2024-07-15 19:08:47.453755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.422 [2024-07-15 19:08:47.453769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.422 [2024-07-15 19:08:47.470599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.422 [2024-07-15 19:08:47.470639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.422 [2024-07-15 19:08:47.470653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.422 [2024-07-15 19:08:47.487605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.422 [2024-07-15 19:08:47.487652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.422 [2024-07-15 19:08:47.487667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.422 [2024-07-15 19:08:47.504755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.422 [2024-07-15 19:08:47.504831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.422 [2024-07-15 19:08:47.504851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.422 [2024-07-15 19:08:47.521853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.422 [2024-07-15 19:08:47.521896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.422 [2024-07-15 19:08:47.521911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.422 [2024-07-15 19:08:47.538310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1279fc0) 00:18:20.422 [2024-07-15 19:08:47.538352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:17146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:20.422 [2024-07-15 19:08:47.538366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:20.422
00:18:20.422 Latency(us)
00:18:20.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:20.422 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:18:20.422 nvme0n1 : 2.00 14904.06 58.22 0.00 0.00 8581.37 7923.90 32648.84
00:18:20.422 ===================================================================================================================
00:18:20.422 Total : 14904.06 58.22 0.00 0.00 8581.37 7923.90 32648.84
00:18:20.423 0
00:18:20.423 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:20.423 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:20.423 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:20.423 | .driver_specific
00:18:20.423 | .nvme_error
00:18:20.423 | .status_code
00:18:20.423 | .command_transient_transport_error'
00:18:20.423 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:20.681 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 ))
00:18:20.681 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80558
00:18:20.681 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80558 ']'
00:18:20.681 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80558
00:18:20.681 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:18:20.681 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:20.681 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80558
00:18:20.681 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:20.681 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:20.681 killing process with pid 80558
00:18:20.681 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80558'
00:18:20.681 Received shutdown signal, test time was about 2.000000 seconds
00:18:20.681
00:18:20.681 Latency(us)
00:18:20.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:20.681 ===================================================================================================================
00:18:20.681 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:20.681 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80558
00:18:20.681 19:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80558
00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80619 00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80619 /var/tmp/bperf.sock 00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80619 ']' 00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:20.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.940 19:08:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:20.941 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:20.941 Zero copy mechanism will not be used. 00:18:20.941 [2024-07-15 19:08:48.164340] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:18:20.941 [2024-07-15 19:08:48.164441] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80619 ] 00:18:21.199 [2024-07-15 19:08:48.303241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.199 [2024-07-15 19:08:48.416783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.199 [2024-07-15 19:08:48.470297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:22.135 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.135 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:22.135 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:22.135 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:22.395 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:22.395 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.395 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:22.395 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.395 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 
-- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:22.395 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:22.654 nvme0n1 00:18:22.654 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:22.654 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.654 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:22.654 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.654 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:22.654 19:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:22.914 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:22.914 Zero copy mechanism will not be used. 00:18:22.914 Running I/O for 2 seconds... 00:18:22.914 [2024-07-15 19:08:50.015766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.914 [2024-07-15 19:08:50.015823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.914 [2024-07-15 19:08:50.015841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.914 [2024-07-15 19:08:50.020184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.914 [2024-07-15 19:08:50.020227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.914 [2024-07-15 19:08:50.020241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.914 [2024-07-15 19:08:50.024689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.914 [2024-07-15 19:08:50.024740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.914 [2024-07-15 19:08:50.024755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.914 [2024-07-15 19:08:50.029182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.914 [2024-07-15 19:08:50.029227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.914 [2024-07-15 19:08:50.029241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.033619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.033661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.033675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.038001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.038043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.038057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.042413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.042454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.042468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.046841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.046881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.046895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.051230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.051270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.051284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.055561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.055600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.055615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.059922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.059961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.059975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.064349] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.064404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.064418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.068852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.068892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.068906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.073401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.073456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.073470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.077966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.078027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.078043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.082380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.082421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.082434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.086809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.086849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.086862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.091131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.091171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.091185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.095606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.095645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.095658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.100012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.100052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.100065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.104420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.104460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.104474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.108883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.108931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.108945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.113399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.113440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.113455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.117942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.117983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.117997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.122362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.122402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.122416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.126848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.126902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.126915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.131318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.131373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.131386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.915 [2024-07-15 19:08:50.135744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.915 [2024-07-15 19:08:50.135798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.915 [2024-07-15 19:08:50.135812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.140144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.140184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.140198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.144574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.144613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.144627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.149014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.149054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.149067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.153534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.153573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.153587] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.157999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.158054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.158084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.162636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.162675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.162690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.167052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.167091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.167105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.171311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.171350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.171364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.175832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.175872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.175886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.180347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.180387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.180400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.184934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.184977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.184990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.189519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.189585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.189599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.194013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.194067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.194080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.916 [2024-07-15 19:08:50.198490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:22.916 [2024-07-15 19:08:50.198541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.916 [2024-07-15 19:08:50.198555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.176 [2024-07-15 19:08:50.202903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.176 [2024-07-15 19:08:50.202943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.176 [2024-07-15 19:08:50.202957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.207328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.207369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.207382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.211750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.211805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.211819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.216198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.216240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.216253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.220625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.220665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.220679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.225089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.225130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.225143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.229422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.229478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.229493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.233946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.233989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.234003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.238397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.238468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.238482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.242896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.242951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.242966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.247346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.247387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.247400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.251875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.251914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.251928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.256342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.256383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.256397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.260637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.260677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.260691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.265082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.265121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.265134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.269485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.269539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.269553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.273898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.273937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.273950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.278319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 
00:18:23.177 [2024-07-15 19:08:50.278360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.278373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.282771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.282812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.282825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.287137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.287183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.287197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.291475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.291527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.291541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.295848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.295888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.295902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.300224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.300265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.300279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.304662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.304710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.304725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.309031] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.309071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.309085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.313378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.313418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.313431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.317809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.317848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.317861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.322191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.322231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.322244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.326650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.326704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.177 [2024-07-15 19:08:50.326734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.177 [2024-07-15 19:08:50.331261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.177 [2024-07-15 19:08:50.331316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.331329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.335741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.335794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.335823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.340280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.340319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.340332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.344882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.344921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.344935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.349324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.349376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.349406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.353802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.353856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.353885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.358174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.358230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.358259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.362748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.362801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.362831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.367085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.367141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.367154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.371583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.371636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.371666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.375895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.375950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.375963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.380508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.380573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.380588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.384953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.384993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.385006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.389385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.389439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.389469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.393886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.393954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.393984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.398394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.398435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.398448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.402917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.402971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.402996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.407216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.407270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.407299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.411688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.411740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.411770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.416113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.416167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.416196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.420582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.420635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.420665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.425086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.425127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.425140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.429421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.429475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.429488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.433877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.433948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.433961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.438412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.438484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.438512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.442710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.442764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.442778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.447167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.447207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.447221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.451494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.451561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.451575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.455952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.178 [2024-07-15 19:08:50.456008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.178 [2024-07-15 19:08:50.456022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.178 [2024-07-15 19:08:50.460386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.179 [2024-07-15 19:08:50.460426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.179 [2024-07-15 19:08:50.460439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.439 [2024-07-15 19:08:50.464730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.439 [2024-07-15 19:08:50.464770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.439 [2024-07-15 19:08:50.464784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.439 [2024-07-15 19:08:50.469125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.439 [2024-07-15 19:08:50.469193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.439 [2024-07-15 19:08:50.469206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.439 [2024-07-15 19:08:50.473632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.439 [2024-07-15 19:08:50.473687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.439 [2024-07-15 19:08:50.473701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.439 [2024-07-15 19:08:50.478081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.439 [2024-07-15 19:08:50.478121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.439 [2024-07-15 19:08:50.478135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.439 [2024-07-15 19:08:50.482437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.439 [2024-07-15 19:08:50.482492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.439 [2024-07-15 19:08:50.482517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.439 [2024-07-15 19:08:50.486825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.439 [2024-07-15 19:08:50.486868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.439 [2024-07-15 19:08:50.486882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.439 [2024-07-15 19:08:50.491147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.439 [2024-07-15 19:08:50.491186] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.491200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.495640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.495693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.495707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.500083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.500123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.500136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.504570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.504626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.504640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.509085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.509127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.509146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.513532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.513573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.513586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.517907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.517947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.517960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.522236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.522278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.522291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.526653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.526693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.526706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.530920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.530960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.530974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.535387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.535427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.535440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.539733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.539773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.539786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.544082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.544121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.544135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.548468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.548518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.548533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.552885] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.552924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.552937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.557328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.557368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.557381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.561758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.561797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.561811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.566104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.566144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.566157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.570483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.570536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.570550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.574811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.574852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.574865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.579200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.579242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.579256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.583566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.583605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.583620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.587970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.588012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.588027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.592347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.592387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.592401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.596822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.596863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.596877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.601224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.601266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.601280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.605653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.605693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.605706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.610039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.610080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.610094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.614402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.614442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.614457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.440 [2024-07-15 19:08:50.618896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.440 [2024-07-15 19:08:50.618937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.440 [2024-07-15 19:08:50.618951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.623237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.623277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.623291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.627648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.627689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.627702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.631991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.632032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.632045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.636380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.636421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.636434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.640800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.640849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.640863] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.645270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.645312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.645325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.649714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.649754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.649768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.654122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.654162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.654176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.658516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.658552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.658565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.662882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.662923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.662936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.667282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.667322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.667336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.671648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.671686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.671700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.675893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.675932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.675945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.680384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.680439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.680454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.684889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.684928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.684941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.689295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.689337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.689352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.693771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.693825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.693838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.698171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.698226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.698240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.702663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.702705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.702718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.707082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.707122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.707136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.711415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.711455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.711468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.715846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.715886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.715900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.720128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.720168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.720181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.441 [2024-07-15 19:08:50.724541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.441 [2024-07-15 19:08:50.724580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.441 [2024-07-15 19:08:50.724593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.702 [2024-07-15 19:08:50.728990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.702 [2024-07-15 19:08:50.729029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.702 [2024-07-15 19:08:50.729043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.702 [2024-07-15 19:08:50.733432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.702 [2024-07-15 19:08:50.733473] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.702 [2024-07-15 19:08:50.733487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.702 [2024-07-15 19:08:50.737846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.702 [2024-07-15 19:08:50.737886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.702 [2024-07-15 19:08:50.737899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.702 [2024-07-15 19:08:50.742179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.702 [2024-07-15 19:08:50.742219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.702 [2024-07-15 19:08:50.742233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.702 [2024-07-15 19:08:50.746486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.702 [2024-07-15 19:08:50.746536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.702 [2024-07-15 19:08:50.746550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.702 [2024-07-15 19:08:50.750829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.702 [2024-07-15 19:08:50.750868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.702 [2024-07-15 19:08:50.750881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.702 [2024-07-15 19:08:50.755211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.755252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.755265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.759692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.759732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.759747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.764066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.764110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.764123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.768485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.768537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.768552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.772847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.772889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.772903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.777216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.777256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.777270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.781601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.781641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.781654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.786005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.786053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.786066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.790433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.790474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.790488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.794796] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.794839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.794852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.799212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.799253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.799266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.803676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.803718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.803731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.808172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.808216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.808230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.812560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.812600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.812614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.816898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.816938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.816952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.821300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.821340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.821353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.825619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.825658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.825672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.830045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.830086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.830099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.834453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.834495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.834523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.838818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.838865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.838878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.843317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.843357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.843371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.847706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.847758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.847772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.852184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.852226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.852241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.856615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.856654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.856669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.861034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.861073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.861098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.865410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.865450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.865463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.869809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.869849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.869863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.874210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.874249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.874262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.703 [2024-07-15 19:08:50.878604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.703 [2024-07-15 19:08:50.878643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.703 [2024-07-15 19:08:50.878657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.882966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.883005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.883019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.887382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.887421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.887435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.891772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.891812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.891826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.896236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.896275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.896289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.900637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.900677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.900690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.905071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.905110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.905123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.909587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.909644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.909661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.914010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.914051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:23.704 [2024-07-15 19:08:50.914065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.918603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.918650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.918663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.923119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.923160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.923174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.927468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.927521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.927536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.931944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.932005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.932019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.936397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.936454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.936467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.940854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.940896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.940911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.945298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.945355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.945368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.949853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.949895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.949908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.954280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.954353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.954367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.958692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.958748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.958762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.963106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.963147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.963160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.967599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.967655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.967668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.972051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.972109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.972124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.976496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.976548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.976562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.980923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.980964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.980978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.985322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.985362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.985375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.704 [2024-07-15 19:08:50.989840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.704 [2024-07-15 19:08:50.989881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.704 [2024-07-15 19:08:50.989894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.967 [2024-07-15 19:08:50.994311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.967 [2024-07-15 19:08:50.994353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.967 [2024-07-15 19:08:50.994367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.967 [2024-07-15 19:08:50.998709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.967 [2024-07-15 19:08:50.998750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.967 [2024-07-15 19:08:50.998764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.967 [2024-07-15 19:08:51.003103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.967 [2024-07-15 19:08:51.003145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.967 [2024-07-15 19:08:51.003159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.967 [2024-07-15 19:08:51.007570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 
00:18:23.967 [2024-07-15 19:08:51.007605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.967 [2024-07-15 19:08:51.007618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.967 [2024-07-15 19:08:51.011967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.967 [2024-07-15 19:08:51.012012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.967 [2024-07-15 19:08:51.012027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.967 [2024-07-15 19:08:51.016622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.967 [2024-07-15 19:08:51.016667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.967 [2024-07-15 19:08:51.016682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.967 [2024-07-15 19:08:51.021133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.967 [2024-07-15 19:08:51.021175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.967 [2024-07-15 19:08:51.021204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.967 [2024-07-15 19:08:51.025698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.967 [2024-07-15 19:08:51.025738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.967 [2024-07-15 19:08:51.025753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.967 [2024-07-15 19:08:51.030133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.967 [2024-07-15 19:08:51.030190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.967 [2024-07-15 19:08:51.030222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.967 [2024-07-15 19:08:51.034577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.967 [2024-07-15 19:08:51.034616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.034646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.038987] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.039028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.039043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.043431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.043474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.043489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.047952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.047997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.048012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.052332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.052378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.052393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.056718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.056764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.056779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.061090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.061133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.061148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.065462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.065517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.065534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.069934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.069976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.069991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.074348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.074390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.074405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.078992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.079036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.079051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.083410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.083464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.083479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.087974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.088021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.088036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.092436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.092481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.092495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.096940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.096983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.096997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.101246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.101287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.101302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.105684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.105725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.105739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.110199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.110258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.110273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.114703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.114741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.114771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.119178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.119221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.119236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.123588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.123628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.123642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.128017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.128059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.128074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.132519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.132559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.132573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.136903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.136944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.136959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.141231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.141275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.141290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.145612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.145653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.145668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.150069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.150111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.150126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.154533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.154573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.968 [2024-07-15 19:08:51.154587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.158954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.158995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:23.968 [2024-07-15 19:08:51.159010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.968 [2024-07-15 19:08:51.163351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.968 [2024-07-15 19:08:51.163392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.163407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.167768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.167810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.167825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.172164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.172207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.172221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.176586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.176626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.176642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.181115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.181159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.181174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.185591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.185628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.185642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.189952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.189994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.190008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.194370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.194413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.194428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.198877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.198919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.198933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.203381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.203422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.203437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.207784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.207824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.207854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.212284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.212326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.212340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.216842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.216883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.216897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.221479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.221537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.221552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.225979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.226019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.226050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.230504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.230554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.230584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.234833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.234872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.234904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.239181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.239222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.239252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.243602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.243641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.243671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.247996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.248037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.248052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.969 [2024-07-15 19:08:51.252574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21c6fe0) 00:18:23.969 [2024-07-15 19:08:51.252631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.969 [2024-07-15 19:08:51.252646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.256968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.257009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.257023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.261463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.261518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.261534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.265933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.265977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.265991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.270382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.270424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.270439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.274843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.274884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.274899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.279181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.279223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.279238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.283566] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.283606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.283621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.287969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.288011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.288026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.292348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.292392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.292407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.296726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.296768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.296783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.301152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.301195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.301209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.305596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.305643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.305658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.310056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.310100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.310115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.314441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.314482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.314514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.318814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.318856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.318870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.323186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.323228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.323243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.327622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.327660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.327674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.332026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.332068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.332083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.336435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.336481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.336496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.340910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.340954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.340969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.345350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.345393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.345408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.349708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.349751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.349766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.354176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.354218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.354249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.358717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.358758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.358774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.363133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.363173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.363203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.367673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.367714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.367745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.372023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.372065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.372081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.376499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.376559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.243 [2024-07-15 19:08:51.376574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.243 [2024-07-15 19:08:51.380958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.243 [2024-07-15 19:08:51.381007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.381022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.385371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.385413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.385427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.389850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.389891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.389923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.394278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.394319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.394350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.398715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.398755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.398786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.403058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.403098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:24.244 [2024-07-15 19:08:51.403129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.407492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.407548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.407562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.411850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.411892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.411906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.416191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.416234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.416249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.420650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.420714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.420730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.425111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.425153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.425167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.429681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.429743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.429758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.434173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.434215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.434246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.438700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.438740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.438771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.443086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.443128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.443159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.447604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.447644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.447659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.452039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.452080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.452095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.456468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.456551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.456567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.460913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.460955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.460970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.465262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.465304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.465319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.469870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.469914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.469928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.474193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.474234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.474248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.478631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.478672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.478687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.482985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.483026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.483041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.487341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.487382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.487397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.491684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.491727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.491742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.496064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 
00:18:24.244 [2024-07-15 19:08:51.496106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.496120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.500430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.500471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.500485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.504840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.244 [2024-07-15 19:08:51.504882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.244 [2024-07-15 19:08:51.504897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.244 [2024-07-15 19:08:51.509288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.245 [2024-07-15 19:08:51.509330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.245 [2024-07-15 19:08:51.509346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.245 [2024-07-15 19:08:51.513703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.245 [2024-07-15 19:08:51.513744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.245 [2024-07-15 19:08:51.513758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.245 [2024-07-15 19:08:51.518051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.245 [2024-07-15 19:08:51.518092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.245 [2024-07-15 19:08:51.518108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.245 [2024-07-15 19:08:51.522341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.245 [2024-07-15 19:08:51.522382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.245 [2024-07-15 19:08:51.522396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.245 [2024-07-15 19:08:51.526758] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.245 [2024-07-15 19:08:51.526800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.245 [2024-07-15 19:08:51.526815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.531144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.531187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.531202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.535597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.535638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.535652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.540025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.540067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.540081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.544468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.544521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.544538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.548847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.548887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.548901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.553242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.553286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.553301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.557735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.557778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.557793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.562107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.562149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.562163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.566410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.566452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.566467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.570770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.570812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.570827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.575200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.575241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.575256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.579611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.579651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.579665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.583939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.505 [2024-07-15 19:08:51.583981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.505 [2024-07-15 19:08:51.583996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.505 [2024-07-15 19:08:51.588366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.588409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.588423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.592743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.592784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.592799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.597149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.597192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.597207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.601583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.601623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.601637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.605978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.606019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.606034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.610317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.610357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.610372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.614778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.614819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.614834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.619068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.619108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.619123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.623426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.623468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.623483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.627770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.627811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.627826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.632211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.632254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.632268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.636763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.636814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.636831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.641223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.641279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.641294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.645691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.645734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:24.506 [2024-07-15 19:08:51.645749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.650107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.650151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.650167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.654575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.654619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.654635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.659003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.659046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.659061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.663524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.663567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.663583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.667991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.668036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.668050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.672459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.672517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.672534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.676988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.677035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.677050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.681402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.681448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.681463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.685778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.685822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.685837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.690205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.690251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.690267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.694692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.694736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.694751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.699084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.699128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.699143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.703616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.703660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.703675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.708041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.708085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.506 [2024-07-15 19:08:51.708099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.506 [2024-07-15 19:08:51.712512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.506 [2024-07-15 19:08:51.712555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.712570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.716936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.716985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.717000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.721409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.721463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.721480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.725942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.725986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.726001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.730355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.730399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.730414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.734785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.734828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.734843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.739167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 
00:18:24.507 [2024-07-15 19:08:51.739210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.739225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.743624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.743667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.743682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.748045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.748087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.748101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.752436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.752478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.752494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.756914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.756956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.756971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.761330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.761373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.761388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.765836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.765878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.765892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.770210] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.770253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.770268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.774630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.774672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.774688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.778925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.778968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.778982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.783381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.783428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.783443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.787777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.787819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.787833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.507 [2024-07-15 19:08:51.792179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.507 [2024-07-15 19:08:51.792221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.507 [2024-07-15 19:08:51.792236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.768 [2024-07-15 19:08:51.796601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.768 [2024-07-15 19:08:51.796642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.768 [2024-07-15 19:08:51.796658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:24.768 [2024-07-15 19:08:51.800875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.768 [2024-07-15 19:08:51.800918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.768 [2024-07-15 19:08:51.800932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.768 [2024-07-15 19:08:51.805308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.768 [2024-07-15 19:08:51.805352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.768 [2024-07-15 19:08:51.805367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.768 [2024-07-15 19:08:51.809718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.768 [2024-07-15 19:08:51.809760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.768 [2024-07-15 19:08:51.809775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.768 [2024-07-15 19:08:51.814180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.768 [2024-07-15 19:08:51.814226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.768 [2024-07-15 19:08:51.814241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.768 [2024-07-15 19:08:51.818649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.768 [2024-07-15 19:08:51.818692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.768 [2024-07-15 19:08:51.818707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.768 [2024-07-15 19:08:51.823111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.768 [2024-07-15 19:08:51.823155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.768 [2024-07-15 19:08:51.823170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.768 [2024-07-15 19:08:51.827529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.768 [2024-07-15 19:08:51.827569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.768 [2024-07-15 19:08:51.827584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.768 [2024-07-15 19:08:51.831953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.768 [2024-07-15 19:08:51.831995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.768 [2024-07-15 19:08:51.832010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.768 [2024-07-15 19:08:51.836373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.768 [2024-07-15 19:08:51.836414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.768 [2024-07-15 19:08:51.836429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.840886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.840928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.840943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.845207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.845249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.845264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.849604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.849644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.849659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.854025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.854067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.854082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.858428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.858470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.858485] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.862860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.862901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.862916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.867270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.867311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.867326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.871806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.871848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.871862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.876276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.876320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.876335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.880752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.880795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.880810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.885106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.885149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.885168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.889581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.889622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.889636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.893997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.894039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.894054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.898404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.898453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.898468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.902984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.903027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.903042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.907377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.907420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.907435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.911897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.911939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.911954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.916373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.916415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.916430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.920794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.920835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.920850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.925264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.925307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.925323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.929695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.929735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.929750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.934117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.934161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.934176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.938496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.938551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.938565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.942959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.943001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.943016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.947390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.947431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.947446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.951817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.951860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.951875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.956234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.956276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.956291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.960709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.960749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.960764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.769 [2024-07-15 19:08:51.965106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.769 [2024-07-15 19:08:51.965148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.769 [2024-07-15 19:08:51.965163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.770 [2024-07-15 19:08:51.969423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.770 [2024-07-15 19:08:51.969464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.770 [2024-07-15 19:08:51.969479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.770 [2024-07-15 19:08:51.973894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.770 [2024-07-15 19:08:51.973935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.770 [2024-07-15 19:08:51.973950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.770 [2024-07-15 19:08:51.978340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.770 [2024-07-15 19:08:51.978382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.770 [2024-07-15 19:08:51.978396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.770 [2024-07-15 19:08:51.982719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 
00:18:24.770 [2024-07-15 19:08:51.982760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.770 [2024-07-15 19:08:51.982774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.770 [2024-07-15 19:08:51.987118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.770 [2024-07-15 19:08:51.987161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.770 [2024-07-15 19:08:51.987176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.770 [2024-07-15 19:08:51.991585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.770 [2024-07-15 19:08:51.991626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.770 [2024-07-15 19:08:51.991640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.770 [2024-07-15 19:08:51.995921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.770 [2024-07-15 19:08:51.995967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.770 [2024-07-15 19:08:51.995987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.770 [2024-07-15 19:08:52.000293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.770 [2024-07-15 19:08:52.000335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.770 [2024-07-15 19:08:52.000349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.770 [2024-07-15 19:08:52.004581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6fe0) 00:18:24.770 [2024-07-15 19:08:52.004622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.770 [2024-07-15 19:08:52.004636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.770 00:18:24.770 Latency(us) 00:18:24.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.770 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:24.770 nvme0n1 : 2.00 6973.12 871.64 0.00 0.00 2290.40 2010.76 9413.35 00:18:24.770 =================================================================================================================== 00:18:24.770 Total : 6973.12 871.64 0.00 0.00 2290.40 2010.76 9413.35 00:18:24.770 0 00:18:24.770 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:18:24.770 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:24.770 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:24.770 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:24.770 | .driver_specific 00:18:24.770 | .nvme_error 00:18:24.770 | .status_code 00:18:24.770 | .command_transient_transport_error' 00:18:25.029 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 450 > 0 )) 00:18:25.029 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80619 00:18:25.029 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80619 ']' 00:18:25.029 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80619 00:18:25.029 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80619 00:18:25.289 killing process with pid 80619 00:18:25.289 Received shutdown signal, test time was about 2.000000 seconds 00:18:25.289 00:18:25.289 Latency(us) 00:18:25.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.289 =================================================================================================================== 00:18:25.289 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80619' 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80619 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80619 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80679 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80679 /var/tmp/bperf.sock 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80679 ']' 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 
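The (( 450 > 0 )) check traced above is driven by nothing more than the bdev_get_iostat RPC plus the jq filter shown a few lines earlier. A minimal standalone sketch of that query, assuming the same bperf socket and bdev name seen in the trace (the counter is only populated when bdev_nvme_set_options is called with --nvme-error-stat, as in the setup that follows):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Ask the bdevperf app listening on /var/tmp/bperf.sock for per-bdev iostat and
# pull out the transient transport error counter from the NVMe error statistics.
errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errs > 0 )) && echo "saw $errs transient transport errors"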
-- # local rpc_addr=/var/tmp/bperf.sock 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.289 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:25.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:25.548 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.548 19:08:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:25.548 [2024-07-15 19:08:52.632039] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:18:25.548 [2024-07-15 19:08:52.632413] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80679 ] 00:18:25.548 [2024-07-15 19:08:52.770198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.807 [2024-07-15 19:08:52.886031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.807 [2024-07-15 19:08:52.940284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:26.396 19:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.396 19:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:26.396 19:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:26.396 19:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:26.654 19:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:26.654 19:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.654 19:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.654 19:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.654 19:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:26.654 19:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:26.911 nvme0n1 00:18:26.911 19:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:26.911 19:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.911 19:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.911 19:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.911 19:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
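Before perform_tests is issued below, the trace above configures the new bperf instance with four RPCs. A condensed sketch of that sequence, with every flag and value copied from the trace; the accel_error_inject_error calls go through rpc_cmd in the trace and are shown here against the default RPC socket as an assumption:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# bdevperf side: keep per-bdev NVMe error statistics and retry failed I/O indefinitely
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# crc32c error injection starts disabled, then is switched to 'corrupt' once the
# controller is attached with data digest enabled (--ddgst); the -i 256 argument is
# copied verbatim from the trace
"$rpc" accel_error_inject_error -o crc32c -t disable
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256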
host/digest.sh@69 -- # bperf_py perform_tests 00:18:26.911 19:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:27.170 Running I/O for 2 seconds... 00:18:27.170 [2024-07-15 19:08:54.305955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fef90 00:18:27.170 [2024-07-15 19:08:54.308569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.170 [2024-07-15 19:08:54.308616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.170 [2024-07-15 19:08:54.322334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190feb58 00:18:27.170 [2024-07-15 19:08:54.324987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.170 [2024-07-15 19:08:54.325032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:27.170 [2024-07-15 19:08:54.338816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fe2e8 00:18:27.170 [2024-07-15 19:08:54.341332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.170 [2024-07-15 19:08:54.341375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:27.170 [2024-07-15 19:08:54.355226] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fda78 00:18:27.170 [2024-07-15 19:08:54.357815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.170 [2024-07-15 19:08:54.357861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:27.170 [2024-07-15 19:08:54.371772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fd208 00:18:27.170 [2024-07-15 19:08:54.374257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.170 [2024-07-15 19:08:54.374300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:27.170 [2024-07-15 19:08:54.388108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fc998 00:18:27.170 [2024-07-15 19:08:54.390594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.170 [2024-07-15 19:08:54.390638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:27.170 [2024-07-15 19:08:54.404403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fc128 00:18:27.170 [2024-07-15 19:08:54.406857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.170 [2024-07-15 19:08:54.406901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:27.170 [2024-07-15 19:08:54.420769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fb8b8 00:18:27.170 [2024-07-15 19:08:54.423174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.170 [2024-07-15 19:08:54.423218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:27.170 [2024-07-15 19:08:54.436980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fb048 00:18:27.170 [2024-07-15 19:08:54.439360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.170 [2024-07-15 19:08:54.439403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:27.170 [2024-07-15 19:08:54.453188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fa7d8 00:18:27.170 [2024-07-15 19:08:54.455564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.170 [2024-07-15 19:08:54.455608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.469435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f9f68 00:18:27.428 [2024-07-15 19:08:54.471786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.471833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.485509] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f96f8 00:18:27.428 [2024-07-15 19:08:54.487806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.487851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.501471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f8e88 00:18:27.428 [2024-07-15 19:08:54.503765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.503808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.517542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f8618 00:18:27.428 [2024-07-15 
19:08:54.519834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.519879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.533775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f7da8 00:18:27.428 [2024-07-15 19:08:54.536034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.536078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.549876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f7538 00:18:27.428 [2024-07-15 19:08:54.552138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.552183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.565953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f6cc8 00:18:27.428 [2024-07-15 19:08:54.568179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.568223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.582018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f6458 00:18:27.428 [2024-07-15 19:08:54.584203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.584248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.598136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f5be8 00:18:27.428 [2024-07-15 19:08:54.600327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.600371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.614289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f5378 00:18:27.428 [2024-07-15 19:08:54.616447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.616490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.630347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f4b08 
00:18:27.428 [2024-07-15 19:08:54.632472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.632522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.646156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f4298 00:18:27.428 [2024-07-15 19:08:54.648235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.648273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.662133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f3a28 00:18:27.428 [2024-07-15 19:08:54.664267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.664309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.678198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f31b8 00:18:27.428 [2024-07-15 19:08:54.680284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.680327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.694218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f2948 00:18:27.428 [2024-07-15 19:08:54.696308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.696352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:27.428 [2024-07-15 19:08:54.710352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f20d8 00:18:27.428 [2024-07-15 19:08:54.712414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.428 [2024-07-15 19:08:54.712458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.726640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f1868 00:18:27.686 [2024-07-15 19:08:54.728699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.728754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.743046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) 
with pdu=0x2000190f0ff8 00:18:27.686 [2024-07-15 19:08:54.745110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.745160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.759664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f0788 00:18:27.686 [2024-07-15 19:08:54.761680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.761727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.775964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190eff18 00:18:27.686 [2024-07-15 19:08:54.777989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.778035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.792204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ef6a8 00:18:27.686 [2024-07-15 19:08:54.794194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.794242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.808581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190eee38 00:18:27.686 [2024-07-15 19:08:54.810548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.810591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.824808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ee5c8 00:18:27.686 [2024-07-15 19:08:54.826732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.826777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.841067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190edd58 00:18:27.686 [2024-07-15 19:08:54.842971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.843017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.857276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24ee7d0) with pdu=0x2000190ed4e8 00:18:27.686 [2024-07-15 19:08:54.859155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.859200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.873456] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ecc78 00:18:27.686 [2024-07-15 19:08:54.875317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.875362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.889657] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ec408 00:18:27.686 [2024-07-15 19:08:54.891487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.891537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.905796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ebb98 00:18:27.686 [2024-07-15 19:08:54.907619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.907663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.921910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190eb328 00:18:27.686 [2024-07-15 19:08:54.923703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.923745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.938018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190eaab8 00:18:27.686 [2024-07-15 19:08:54.939806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.939849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.954074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ea248 00:18:27.686 [2024-07-15 19:08:54.955828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.955867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:27.686 [2024-07-15 19:08:54.969906] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e99d8 00:18:27.686 [2024-07-15 19:08:54.971618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.686 [2024-07-15 19:08:54.971659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:27.944 [2024-07-15 19:08:54.986013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e9168 00:18:27.944 [2024-07-15 19:08:54.987740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.944 [2024-07-15 19:08:54.987781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:27.944 [2024-07-15 19:08:55.001883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e88f8 00:18:27.944 [2024-07-15 19:08:55.003541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.944 [2024-07-15 19:08:55.003578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:27.944 [2024-07-15 19:08:55.017882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e8088 00:18:27.944 [2024-07-15 19:08:55.019584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.944 [2024-07-15 19:08:55.019629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:27.944 [2024-07-15 19:08:55.033852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e7818 00:18:27.944 [2024-07-15 19:08:55.035465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.944 [2024-07-15 19:08:55.035516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:27.944 [2024-07-15 19:08:55.049655] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e6fa8 00:18:27.944 [2024-07-15 19:08:55.051243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.944 [2024-07-15 19:08:55.051279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:27.944 [2024-07-15 19:08:55.065643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e6738 00:18:27.944 [2024-07-15 19:08:55.067222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.944 [2024-07-15 19:08:55.067259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:27.944 [2024-07-15 19:08:55.081446] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e5ec8 00:18:27.945 [2024-07-15 19:08:55.083013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.945 [2024-07-15 19:08:55.083053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.945 [2024-07-15 19:08:55.097263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e5658 00:18:27.945 [2024-07-15 19:08:55.098831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.945 [2024-07-15 19:08:55.098870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:27.945 [2024-07-15 19:08:55.113738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e4de8 00:18:27.945 [2024-07-15 19:08:55.115317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.945 [2024-07-15 19:08:55.115357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:27.945 [2024-07-15 19:08:55.130312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e4578 00:18:27.945 [2024-07-15 19:08:55.131864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.945 [2024-07-15 19:08:55.131901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:27.945 [2024-07-15 19:08:55.146599] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e3d08 00:18:27.945 [2024-07-15 19:08:55.148195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.945 [2024-07-15 19:08:55.148234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:27.945 [2024-07-15 19:08:55.162775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e3498 00:18:27.945 [2024-07-15 19:08:55.164342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.945 [2024-07-15 19:08:55.164377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:27.945 [2024-07-15 19:08:55.178593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e2c28 00:18:27.945 [2024-07-15 19:08:55.180071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.945 [2024-07-15 19:08:55.180107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:27.945 
[2024-07-15 19:08:55.194480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e23b8 00:18:27.945 [2024-07-15 19:08:55.195956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.945 [2024-07-15 19:08:55.195992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:27.945 [2024-07-15 19:08:55.210456] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e1b48 00:18:27.945 [2024-07-15 19:08:55.211910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.945 [2024-07-15 19:08:55.211947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:27.945 [2024-07-15 19:08:55.226422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e12d8 00:18:27.945 [2024-07-15 19:08:55.227857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.945 [2024-07-15 19:08:55.227895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.242211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e0a68 00:18:28.204 [2024-07-15 19:08:55.243585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.243621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.257994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e01f8 00:18:28.204 [2024-07-15 19:08:55.259361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.259402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.273986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190df988 00:18:28.204 [2024-07-15 19:08:55.275316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.275353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.289831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190df118 00:18:28.204 [2024-07-15 19:08:55.291140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.291180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0007 p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.305779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190de8a8 00:18:28.204 [2024-07-15 19:08:55.307097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.307141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.322182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190de038 00:18:28.204 [2024-07-15 19:08:55.323514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.323552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.345177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190de038 00:18:28.204 [2024-07-15 19:08:55.347730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.347779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.361291] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190de8a8 00:18:28.204 [2024-07-15 19:08:55.363827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.363875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.377717] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190df118 00:18:28.204 [2024-07-15 19:08:55.380300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.380355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.394375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190df988 00:18:28.204 [2024-07-15 19:08:55.397002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.397054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.411050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e01f8 00:18:28.204 [2024-07-15 19:08:55.413554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.413597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.427590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e0a68 00:18:28.204 [2024-07-15 19:08:55.430072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.430118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.444276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e12d8 00:18:28.204 [2024-07-15 19:08:55.446703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.446749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.460464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e1b48 00:18:28.204 [2024-07-15 19:08:55.462870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.462920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:28.204 [2024-07-15 19:08:55.476860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e23b8 00:18:28.204 [2024-07-15 19:08:55.479294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.204 [2024-07-15 19:08:55.479342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.493358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e2c28 00:18:28.463 [2024-07-15 19:08:55.495780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.495831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.510080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e3498 00:18:28.463 [2024-07-15 19:08:55.512490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.512551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.526807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e3d08 00:18:28.463 [2024-07-15 19:08:55.529168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.529246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.543548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e4578 00:18:28.463 [2024-07-15 19:08:55.545870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.545935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.560009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e4de8 00:18:28.463 [2024-07-15 19:08:55.562321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.562370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.576841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e5658 00:18:28.463 [2024-07-15 19:08:55.579176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.579227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.593760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e5ec8 00:18:28.463 [2024-07-15 19:08:55.596098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.596151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.610840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e6738 00:18:28.463 [2024-07-15 19:08:55.613134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.613184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.627478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e6fa8 00:18:28.463 [2024-07-15 19:08:55.629803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.629849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.643739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e7818 00:18:28.463 [2024-07-15 19:08:55.645921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.645965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.659829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e8088 00:18:28.463 [2024-07-15 19:08:55.661971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.662016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.676026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e88f8 00:18:28.463 [2024-07-15 19:08:55.678197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.678243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.692057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e9168 00:18:28.463 [2024-07-15 19:08:55.694184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.694227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.708159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190e99d8 00:18:28.463 [2024-07-15 19:08:55.710277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.710320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.724643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ea248 00:18:28.463 [2024-07-15 19:08:55.726780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.726831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:28.463 [2024-07-15 19:08:55.741249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190eaab8 00:18:28.463 [2024-07-15 19:08:55.743339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.463 [2024-07-15 19:08:55.743385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.757755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190eb328 00:18:28.722 [2024-07-15 19:08:55.759845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 
19:08:55.759892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.773782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ebb98 00:18:28.722 [2024-07-15 19:08:55.775775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.775817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.789817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ec408 00:18:28.722 [2024-07-15 19:08:55.791847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.791889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.805886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ecc78 00:18:28.722 [2024-07-15 19:08:55.807918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.807962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.821979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ed4e8 00:18:28.722 [2024-07-15 19:08:55.823915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.823959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.838062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190edd58 00:18:28.722 [2024-07-15 19:08:55.839975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.840016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.854416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ee5c8 00:18:28.722 [2024-07-15 19:08:55.856383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.856431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.870812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190eee38 00:18:28.722 [2024-07-15 19:08:55.872925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:28.722 [2024-07-15 19:08:55.872975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.887360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190ef6a8 00:18:28.722 [2024-07-15 19:08:55.889267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.889311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.903383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190eff18 00:18:28.722 [2024-07-15 19:08:55.905259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.905300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.919518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f0788 00:18:28.722 [2024-07-15 19:08:55.921351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.921393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.935437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f0ff8 00:18:28.722 [2024-07-15 19:08:55.937283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.937327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.951388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f1868 00:18:28.722 [2024-07-15 19:08:55.953291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.953351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.967522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f20d8 00:18:28.722 [2024-07-15 19:08:55.969309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.969353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.983611] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f2948 00:18:28.722 [2024-07-15 19:08:55.985368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24127 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:55.985412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:28.722 [2024-07-15 19:08:55.999637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f31b8 00:18:28.722 [2024-07-15 19:08:56.001382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.722 [2024-07-15 19:08:56.001448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.015803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f3a28 00:18:28.992 [2024-07-15 19:08:56.017532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.017574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.031725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f4298 00:18:28.992 [2024-07-15 19:08:56.033468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.033523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.047718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f4b08 00:18:28.992 [2024-07-15 19:08:56.049402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.049444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.063800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f5378 00:18:28.992 [2024-07-15 19:08:56.065475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.065542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.079891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f5be8 00:18:28.992 [2024-07-15 19:08:56.081561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.081605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.096108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f6458 00:18:28.992 [2024-07-15 19:08:56.097757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21010 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.097800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.112486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f6cc8 00:18:28.992 [2024-07-15 19:08:56.114103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.114146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.128760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f7538 00:18:28.992 [2024-07-15 19:08:56.130331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.130373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.144633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f7da8 00:18:28.992 [2024-07-15 19:08:56.146214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.146253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.160882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f8618 00:18:28.992 [2024-07-15 19:08:56.162427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.162471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.177224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f8e88 00:18:28.992 [2024-07-15 19:08:56.178809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.178854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.193811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f96f8 00:18:28.992 [2024-07-15 19:08:56.195363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.195407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.209894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190f9f68 00:18:28.992 [2024-07-15 19:08:56.211376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:89 nsid:1 lba:23640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.211418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.225901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fa7d8 00:18:28.992 [2024-07-15 19:08:56.227349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.227390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.241836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fb048 00:18:28.992 [2024-07-15 19:08:56.243269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.243311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.257819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fb8b8 00:18:28.992 [2024-07-15 19:08:56.259233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.259275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:28.992 [2024-07-15 19:08:56.273909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee7d0) with pdu=0x2000190fc128 00:18:28.992 [2024-07-15 19:08:56.275297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.992 [2024-07-15 19:08:56.275339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:29.251 00:18:29.251 Latency(us) 00:18:29.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.251 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:29.251 nvme0n1 : 2.00 15598.54 60.93 0.00 0.00 8197.89 5808.87 31457.28 00:18:29.251 =================================================================================================================== 00:18:29.251 Total : 15598.54 60.93 0.00 0.00 8197.89 5808.87 31457.28 00:18:29.251 0 00:18:29.251 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:29.251 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:29.251 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:29.251 | .driver_specific 00:18:29.251 | .nvme_error 00:18:29.251 | .status_code 00:18:29.251 | .command_transient_transport_error' 00:18:29.251 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:29.508 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
(( 122 > 0 )) 00:18:29.508 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80679 00:18:29.508 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80679 ']' 00:18:29.508 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80679 00:18:29.508 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:29.508 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:29.508 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80679 00:18:29.508 killing process with pid 80679 00:18:29.508 Received shutdown signal, test time was about 2.000000 seconds 00:18:29.508 00:18:29.508 Latency(us) 00:18:29.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.508 =================================================================================================================== 00:18:29.508 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.508 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:29.508 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:29.508 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80679' 00:18:29.508 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80679 00:18:29.508 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80679 00:18:29.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80735 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80735 /var/tmp/bperf.sock 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80735 ']' 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:29.765 19:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:29.765 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:29.765 Zero copy mechanism will not be used. 00:18:29.765 [2024-07-15 19:08:56.883799] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:18:29.765 [2024-07-15 19:08:56.883883] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80735 ] 00:18:29.765 [2024-07-15 19:08:57.014016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.023 [2024-07-15 19:08:57.128021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.023 [2024-07-15 19:08:57.180598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:30.586 19:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.586 19:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:30.586 19:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:30.586 19:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:30.844 19:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:30.844 19:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.844 19:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:30.844 19:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.844 19:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:30.844 19:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:31.422 nvme0n1 00:18:31.422 19:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:31.422 19:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.422 19:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:31.422 19:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.422 19:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:31.422 19:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:31.422 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:31.422 Zero copy mechanism will not be used. 00:18:31.422 Running I/O for 2 seconds... 00:18:31.422 [2024-07-15 19:08:58.557606] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.557934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.557966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.562767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.563066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.563101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.567954] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.568253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.568293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.573093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.573394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.573440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.578245] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.578563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.578601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.583349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.583672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.583710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.588527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.588842] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.588881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.593712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.594025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.594064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.598923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.599227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.599261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.604026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.604331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.604367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.609155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.609450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.609486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.614252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.614563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.614598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.619380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.619695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.619727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.624531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 
00:18:31.422 [2024-07-15 19:08:58.624843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.624909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.629707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.630012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.630051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.634862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.635163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.635196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.639973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.640279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.640318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.645107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.645405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.645454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.650210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.650518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.650549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.655271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.655582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.655620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.660355] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.660665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.660690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.665465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.665785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.665823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.670583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.670878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.670916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.675698] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.675996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.676034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.680851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.681165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.681205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.686040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.686341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.686378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.691133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.691431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.691476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.696297] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.422 [2024-07-15 19:08:58.696611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.422 [2024-07-15 19:08:58.696648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.422 [2024-07-15 19:08:58.701410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.423 [2024-07-15 19:08:58.701721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.423 [2024-07-15 19:08:58.701757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.423 [2024-07-15 19:08:58.706525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.423 [2024-07-15 19:08:58.706821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.423 [2024-07-15 19:08:58.706856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.711605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.711907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.711948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.716725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.717023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.717073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.721810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.722107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.722141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.726883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.727179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.727215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
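(Editor's note; the following is not part of the captured console output.) The repeated pairs above, a tcp.c data_crc32_calc_done "Data digest error" followed by a WRITE command completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22), are produced by the setup traced at the start of this test case. A condensed sketch of that setup, restricted to commands and arguments that appear verbatim in the trace; the backgrounding and the comments are the editor's reading of the flags, not an authoritative description of the harness:

    # Start bdevperf idle (-z waits for an RPC "perform_tests"); workload matches the trace:
    # randwrite, 128 KiB I/O, queue depth 16, 2 second run, core mask 0x2.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # Keep NVMe error statistics and retry failed I/O indefinitely at the bdev layer.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the controller with --ddgst, i.e. TCP data digest enabled on this connection.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject "corrupt" errors into crc32c accel operations (issued through the harness's
    # rpc_cmd helper, arguments as traced) so that data-digest handling starts failing.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the timed run; each digest mismatch then shows up as one of the
    # error/completion pairs logged above and bumps the counter read back afterwards.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests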
00:18:31.682 [2024-07-15 19:08:58.731980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.732295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.732341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.737143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.737445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.737487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.742320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.742633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.742660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.747422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.747737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.747785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.752574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.752888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.752924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.757770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.758076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.758110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.762863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.763161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.763195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.767965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.768263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.768297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.773098] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.773407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.773441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.778221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.778538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.778576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.783355] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.783671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.783712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.788512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.788823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.788856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.793615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.793934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.793967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.798750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.799057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.799101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.803896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.804203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.804242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.809024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.809334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.809373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.814282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.814601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.814634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.819410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.819719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.819759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.824577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.824904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.824946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.829776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.830098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.830129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.834962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.835271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.835317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.682 [2024-07-15 19:08:58.840153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.682 [2024-07-15 19:08:58.840475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.682 [2024-07-15 19:08:58.840525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.845313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.845631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.845668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.850397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.850711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.850745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.855534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.855836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.855872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.860619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.860938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.860973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.865750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.866048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.866077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.870884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.871191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 
[2024-07-15 19:08:58.871231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.876030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.876333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.876373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.881110] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.881413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.881452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.886277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.886590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.886617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.891362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.891678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.891710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.896124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.896195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.896218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.901147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.901220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.901242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.906235] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.906307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.906332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.911276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.911349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.911373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.916366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.916440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.916464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.921416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.921491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.921530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.926478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.926566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.926589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.931560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.931636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.931658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.936630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.936705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.936739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.941690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.941763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.941786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.946793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.946867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.946891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.951793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.951871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.951895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.956894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.956982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.957006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.961933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.962002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.962024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.683 [2024-07-15 19:08:58.967006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.683 [2024-07-15 19:08:58.967076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.683 [2024-07-15 19:08:58.967098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:58.972044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:58.972117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:58.972139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:58.977117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:58.977196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:58.977218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:58.982179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:58.982248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:58.982271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:58.987247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:58.987320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:58.987342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:58.992297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:58.992370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:58.992392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:58.997379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:58.997451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:58.997472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:59.002370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:59.002443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:59.002465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:59.007397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:59.007465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:59.007487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:59.012455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 
[2024-07-15 19:08:59.012537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:59.012559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:59.017573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:59.017641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:59.017663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:59.022617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:59.022685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:59.022708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:59.027727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:59.027799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:59.027821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:59.032752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:59.032823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.943 [2024-07-15 19:08:59.032845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.943 [2024-07-15 19:08:59.037805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.943 [2024-07-15 19:08:59.037872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.037894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.042853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.042924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.042949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.047914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.047981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.048002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.052984] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.053051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.053073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.058075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.058148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.058170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.063081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.063148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.063176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.068187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.068259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.068282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.073307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.073382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.073404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.078413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.078488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.078525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.083489] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.083574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.083596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.088565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.088645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.088667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.093662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.093737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.093760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.098723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.098798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.098821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.103777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.103850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.103873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.108844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.108913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.108936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.113951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.114025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.114047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:31.944 [2024-07-15 19:08:59.119024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.119097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.119118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.124146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.124217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.124239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.129238] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.129307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.129330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.134321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.134388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.134410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.139398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.139470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.139492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.144421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.144492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.144526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.149517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.149583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.149605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.154528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.154603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.154625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.159577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.159651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.159673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.164629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.164702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.944 [2024-07-15 19:08:59.164735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.944 [2024-07-15 19:08:59.169721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.944 [2024-07-15 19:08:59.169789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.169812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.945 [2024-07-15 19:08:59.174802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.945 [2024-07-15 19:08:59.174875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.174898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.945 [2024-07-15 19:08:59.179886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.945 [2024-07-15 19:08:59.179953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.179975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.945 [2024-07-15 19:08:59.184978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.945 [2024-07-15 19:08:59.185048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.185070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.945 [2024-07-15 19:08:59.190010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.945 [2024-07-15 19:08:59.190078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.190100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.945 [2024-07-15 19:08:59.195033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.945 [2024-07-15 19:08:59.195099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.195121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.945 [2024-07-15 19:08:59.200081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.945 [2024-07-15 19:08:59.200149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.200170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.945 [2024-07-15 19:08:59.205174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.945 [2024-07-15 19:08:59.205242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.205264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.945 [2024-07-15 19:08:59.210215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.945 [2024-07-15 19:08:59.210283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.210305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.945 [2024-07-15 19:08:59.215212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.945 [2024-07-15 19:08:59.215279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.215301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.945 [2024-07-15 19:08:59.220219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.945 [2024-07-15 19:08:59.220286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.220307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.945 [2024-07-15 19:08:59.225264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.945 [2024-07-15 19:08:59.225330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.225352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.945 [2024-07-15 19:08:59.230311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:31.945 [2024-07-15 19:08:59.230384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.945 [2024-07-15 19:08:59.230407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.235366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.235433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 [2024-07-15 19:08:59.235455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.240430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.240516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 [2024-07-15 19:08:59.240539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.245458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.245543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 [2024-07-15 19:08:59.245565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.250436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.250516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 [2024-07-15 19:08:59.250543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.255412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.255483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 
[2024-07-15 19:08:59.255517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.260407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.260474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 [2024-07-15 19:08:59.260496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.265455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.265537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 [2024-07-15 19:08:59.265559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.270543] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.270614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 [2024-07-15 19:08:59.270636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.275553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.275621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 [2024-07-15 19:08:59.275643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.280553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.280620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 [2024-07-15 19:08:59.280642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.285560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.285629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 [2024-07-15 19:08:59.285650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.290573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.290639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 [2024-07-15 19:08:59.290660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.295617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.205 [2024-07-15 19:08:59.295685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.205 [2024-07-15 19:08:59.295707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.205 [2024-07-15 19:08:59.300639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.300718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.300740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.305700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.305769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.305791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.310694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.310766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.310788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.315687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.315759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.315781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.320723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.320792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.320814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.325722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.325790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.325812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.330721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.330789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.330810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.335755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.335825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.335848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.340762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.340829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.340851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.345720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.345789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.345812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.350750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.350823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.350845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.355835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.355918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.355943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.360921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.360999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.361023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.365958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.366033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.366056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.370976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.371051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.371075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.376037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.376110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.376142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.381134] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.381204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.381237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.386153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.386222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.386246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.391146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.391213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.391234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.396179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 
[2024-07-15 19:08:59.396244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.396267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.401284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.401352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.401374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.406339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.406407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.406429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.411347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.411421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.411445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.416392] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.416469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.416493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.421462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.421544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.421566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.426460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.426543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.426566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.431473] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.431558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.431580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.436434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.436513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.436535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.441528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.206 [2024-07-15 19:08:59.441601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.206 [2024-07-15 19:08:59.441624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.206 [2024-07-15 19:08:59.446524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.207 [2024-07-15 19:08:59.446591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.207 [2024-07-15 19:08:59.446613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.207 [2024-07-15 19:08:59.451491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.207 [2024-07-15 19:08:59.451575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.207 [2024-07-15 19:08:59.451597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.207 [2024-07-15 19:08:59.456538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.207 [2024-07-15 19:08:59.456608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.207 [2024-07-15 19:08:59.456629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.207 [2024-07-15 19:08:59.461609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.207 [2024-07-15 19:08:59.461681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.207 [2024-07-15 19:08:59.461703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.207 [2024-07-15 19:08:59.466633] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.207 [2024-07-15 19:08:59.466708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.207 [2024-07-15 19:08:59.466730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.207 [2024-07-15 19:08:59.471675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.207 [2024-07-15 19:08:59.471746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.207 [2024-07-15 19:08:59.471768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.207 [2024-07-15 19:08:59.476740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.207 [2024-07-15 19:08:59.476813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.207 [2024-07-15 19:08:59.476836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.207 [2024-07-15 19:08:59.481809] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.207 [2024-07-15 19:08:59.481879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.207 [2024-07-15 19:08:59.481902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.207 [2024-07-15 19:08:59.486867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.207 [2024-07-15 19:08:59.486941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.207 [2024-07-15 19:08:59.486963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.207 [2024-07-15 19:08:59.491940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.207 [2024-07-15 19:08:59.492014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.207 [2024-07-15 19:08:59.492038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.497017] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.497091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.497115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
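(Editor's note on the completion status: the "(00/22)" printed by spdk_nvme_print_completion is the Status Code Type / Status Code pair — type 0x0 (generic command status) with code 0x22, which the log itself labels COMMAND TRANSIENT TRANSPORT ERROR. A minimal decoder covering only the cases seen in this log — the function name and table are illustrative, not an SPDK API — could be sketched as:

#include <stdint.h>
#include <stdio.h>

/* Illustrative decoder for the "(SCT/SC)" pair in the completions above.
 * Only generic (SCT 0x0) codes that actually appear in this log are handled. */
static const char *nvme_status_str(uint8_t sct, uint8_t sc)
{
    if (sct != 0x0) {
        return "NON-GENERIC STATUS";
    }
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x22: return "COMMAND TRANSIENT TRANSPORT ERROR";
    default:   return "OTHER GENERIC STATUS";
    }
}

int main(void)
{
    /* Every completion in this section carries SCT 0x0, SC 0x22. */
    printf("(00/22) -> %s\n", nvme_status_str(0x0, 0x22));
    return 0;
}

Because the status is transient and dnr (do not retry) is 0 in each completion, the initiator is permitted to resubmit the command, which is why the test keeps issuing further WRITEs after each digest failure. End of note.)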
00:18:32.467 [2024-07-15 19:08:59.502044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.502117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.502141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.507070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.507141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.507166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.512144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.512215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.512238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.517220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.517291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.517314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.522277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.522350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.522373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.527315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.527400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.527423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.532376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.532447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.532470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.537466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.537551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.537576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.542486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.542566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.542590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.547474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.547556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.547579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.552622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.552695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.552728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.557652] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.557726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.557748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.562672] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.562741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.467 [2024-07-15 19:08:59.562763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.467 [2024-07-15 19:08:59.567709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.467 [2024-07-15 19:08:59.567779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.567801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.572777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.572847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.572869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.577764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.577831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.577853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.582845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.582913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.582935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.587848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.587921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.587942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.592913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.592981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.593002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.597948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.598017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.598039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.602976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.603047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.603069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.608007] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.608083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.608105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.613009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.613076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.613097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.618081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.618149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.618170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.623119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.623191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.623212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.628133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.628204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.628225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.633188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.633258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.633280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.638213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.638281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 
[2024-07-15 19:08:59.638303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.643260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.643332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.643354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.648301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.648373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.648394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.653337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.653404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.653426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.658393] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.658462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.658484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.663398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.663465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.663487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.668404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.668471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.668493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.673486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.673572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.673594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.678563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.678631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.678653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.683560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.683629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.683650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.688599] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.688664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.688686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.693687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.693756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.693778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.698723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.698790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.698812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.703767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.703840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.703863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.708758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.708826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.468 [2024-07-15 19:08:59.708848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.468 [2024-07-15 19:08:59.713822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.468 [2024-07-15 19:08:59.713890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.469 [2024-07-15 19:08:59.713912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.469 [2024-07-15 19:08:59.718893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.469 [2024-07-15 19:08:59.718959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.469 [2024-07-15 19:08:59.718981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.469 [2024-07-15 19:08:59.723925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.469 [2024-07-15 19:08:59.723992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.469 [2024-07-15 19:08:59.724013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.469 [2024-07-15 19:08:59.728983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.469 [2024-07-15 19:08:59.729055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.469 [2024-07-15 19:08:59.729077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.469 [2024-07-15 19:08:59.734033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.469 [2024-07-15 19:08:59.734101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.469 [2024-07-15 19:08:59.734122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.469 [2024-07-15 19:08:59.739060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.469 [2024-07-15 19:08:59.739132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.469 [2024-07-15 19:08:59.739154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.469 [2024-07-15 19:08:59.744080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.469 [2024-07-15 19:08:59.744153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.469 [2024-07-15 19:08:59.744176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.469 [2024-07-15 19:08:59.749141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.469 [2024-07-15 19:08:59.749215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.469 [2024-07-15 19:08:59.749236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.469 [2024-07-15 19:08:59.754202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.469 [2024-07-15 19:08:59.754270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.469 [2024-07-15 19:08:59.754292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.759252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 [2024-07-15 19:08:59.759322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.759344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.764302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 [2024-07-15 19:08:59.764369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.764390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.769341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 [2024-07-15 19:08:59.769423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.769444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.774335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 [2024-07-15 19:08:59.774403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.774425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.779379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 
[2024-07-15 19:08:59.779448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.779470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.784425] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 [2024-07-15 19:08:59.784492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.784529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.789428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 [2024-07-15 19:08:59.789494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.789528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.794471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 [2024-07-15 19:08:59.794555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.794577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.799553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 [2024-07-15 19:08:59.799628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.799651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.804610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 [2024-07-15 19:08:59.804677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.804699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.809626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 [2024-07-15 19:08:59.809692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.809715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.814661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 [2024-07-15 19:08:59.814727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.814749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.729 [2024-07-15 19:08:59.819713] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.729 [2024-07-15 19:08:59.819782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.729 [2024-07-15 19:08:59.819804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.824750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.824817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.824839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.829762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.829831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.829852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.834827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.834900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.834923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.839884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.839953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.839976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.844927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.844994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.845016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.849920] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.849992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.850014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.854956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.855030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.855052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.859994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.860063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.860084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.865030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.865106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.865128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.870036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.870103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.870125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.875031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.875100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.875121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.880079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.880159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.880181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:32.730 [2024-07-15 19:08:59.885160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.885228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.885250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.890216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.890288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.890309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.895279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.895352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.895374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.900329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.900397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.900419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.905402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.905474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.905496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.910437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.910525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.910548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.915475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.915559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.915582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.920529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.920597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.920618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.925608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.925681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.925704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.930656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.930726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.930749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.935721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.935793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.935816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.940778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.940846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.940869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.945807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.945875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.945898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.950845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.950917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.950940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.955869] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.955939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.955961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.960884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.960966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.730 [2024-07-15 19:08:59.960990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.730 [2024-07-15 19:08:59.965955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.730 [2024-07-15 19:08:59.966028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.731 [2024-07-15 19:08:59.966053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.731 [2024-07-15 19:08:59.970970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.731 [2024-07-15 19:08:59.971045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.731 [2024-07-15 19:08:59.971068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.731 [2024-07-15 19:08:59.976037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.731 [2024-07-15 19:08:59.976106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.731 [2024-07-15 19:08:59.976130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.731 [2024-07-15 19:08:59.981090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.731 [2024-07-15 19:08:59.981162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.731 [2024-07-15 19:08:59.981185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.731 [2024-07-15 19:08:59.986092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.731 [2024-07-15 19:08:59.986165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.731 [2024-07-15 19:08:59.986189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.731 [2024-07-15 19:08:59.991136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.731 [2024-07-15 19:08:59.991207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.731 [2024-07-15 19:08:59.991229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.731 [2024-07-15 19:08:59.996167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.731 [2024-07-15 19:08:59.996233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.731 [2024-07-15 19:08:59.996255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.731 [2024-07-15 19:09:00.001196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.731 [2024-07-15 19:09:00.001265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.731 [2024-07-15 19:09:00.001289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.731 [2024-07-15 19:09:00.006258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.731 [2024-07-15 19:09:00.006328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.731 [2024-07-15 19:09:00.006351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.731 [2024-07-15 19:09:00.011293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.731 [2024-07-15 19:09:00.011365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.731 [2024-07-15 19:09:00.011388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.731 [2024-07-15 19:09:00.016326] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.731 [2024-07-15 19:09:00.016397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.731 [2024-07-15 19:09:00.016419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.021324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.021400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 
[2024-07-15 19:09:00.021423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.026381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.026448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.026470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.031453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.031537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.031561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.036487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.036569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.036591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.041522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.041594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.041616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.046592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.046661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.046684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.051621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.051695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.051717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.056694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.056781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.056804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.061823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.061901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.061925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.066879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.066952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.066974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.071897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.071969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.071992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.076926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.076994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.077016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.081925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.081994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.082016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.086923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.087000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.087023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.091919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.091986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.092008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.096976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.097047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.097069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.102031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.102101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.102123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.107038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.107103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.107125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.112064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.991 [2024-07-15 19:09:00.112133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.991 [2024-07-15 19:09:00.112156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.991 [2024-07-15 19:09:00.117119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.117192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.117213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.122141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.122209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.122231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.127142] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.127214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.127236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.132126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.132194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.132216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.137220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.137292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.137314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.142250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.142317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.142339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.147293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.147366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.147388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.152346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.152417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.152440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.157429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.157514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.157536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.162525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 
[2024-07-15 19:09:00.162597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.162619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.167540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.167605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.167629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.172489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.172568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.172591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.177666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.177739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.177761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.182737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.182811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.182835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.187803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.187876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.187899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.192901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.192970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.192992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.198003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.198078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.198100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.203090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.203163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.203188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.208224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.208299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.208323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.213320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.213394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.213419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.218376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.218448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.218471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.223454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.223541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.223565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.228619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.228705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.228739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.233773] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.233849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.233873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.238822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.238895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.238917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.243918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.243991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.244014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.248982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.249066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.249087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.254058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.254126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.254148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.992 [2024-07-15 19:09:00.259148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.992 [2024-07-15 19:09:00.259226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.992 [2024-07-15 19:09:00.259249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.993 [2024-07-15 19:09:00.264249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.993 [2024-07-15 19:09:00.264321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.993 [2024-07-15 19:09:00.264342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:32.993 [2024-07-15 19:09:00.269348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.993 [2024-07-15 19:09:00.269425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.993 [2024-07-15 19:09:00.269447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.993 [2024-07-15 19:09:00.274385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.993 [2024-07-15 19:09:00.274462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.993 [2024-07-15 19:09:00.274483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.993 [2024-07-15 19:09:00.279448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:32.993 [2024-07-15 19:09:00.279536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.279558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.284534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.284602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.284624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.289628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.289695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.289716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.294683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.294751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.294773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.299752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.299821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.299843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.304814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.304893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.304915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.309854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.309924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.309945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.314871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.314945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.314967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.319908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.319980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.320001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.324926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.324994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.325016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.330014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.330083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.330104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.335046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.335123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.335144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.340092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.340165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.340186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.345132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.345201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.345222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.350159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.350227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.350249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.355187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.355254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.355276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.360144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.360213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.360235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.365164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.365231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.365253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.370195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.370264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.370285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.375168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.375234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.375257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.380211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.380287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.380309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.385241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.385310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.385332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.390265] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.390337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.390359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.395326] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.395398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.395419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.400355] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.400426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.400448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.405438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.405517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 
[2024-07-15 19:09:00.405540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.410448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.410532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.410554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.415475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.415558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.415580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.420520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.420587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.420608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.425573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.425640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.425662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.430573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.430646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.430668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.435654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.435727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.435748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.440586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.440652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.440674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.445588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.445654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.445676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.450584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.450650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.450673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.455584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.455652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.455673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.460598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.460665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.460687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.465636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.465706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.465728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.470642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.470712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.470733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.475691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.475758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.475779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.480792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.480870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.480893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.485821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.485894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.485916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.490836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.490907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.490929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.495859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.495926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.495948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.500960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.501031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.501052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.505993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.506063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.506085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.510999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.511066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.511087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.516048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.516123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.516145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.521120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.521187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.521210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.526186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.526257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.526279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.531262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.531336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.531358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:33.252 [2024-07-15 19:09:00.536308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.252 [2024-07-15 19:09:00.536375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.252 [2024-07-15 19:09:00.536397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.511 [2024-07-15 19:09:00.541390] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.511 [2024-07-15 19:09:00.541456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.511 [2024-07-15 19:09:00.541478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:33.511 [2024-07-15 19:09:00.546467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ee970) with pdu=0x2000190fef90 00:18:33.511 
[2024-07-15 19:09:00.546549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.511 [2024-07-15 19:09:00.546572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:33.511 00:18:33.511 Latency(us) 00:18:33.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.511 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:33.511 nvme0n1 : 2.00 6103.09 762.89 0.00 0.00 2615.52 2070.34 10187.87 00:18:33.511 =================================================================================================================== 00:18:33.511 Total : 6103.09 762.89 0.00 0.00 2615.52 2070.34 10187.87 00:18:33.511 0 00:18:33.511 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:33.511 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:33.511 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:33.511 | .driver_specific 00:18:33.511 | .nvme_error 00:18:33.511 | .status_code 00:18:33.511 | .command_transient_transport_error' 00:18:33.511 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:33.769 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 394 > 0 )) 00:18:33.769 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80735 00:18:33.769 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80735 ']' 00:18:33.769 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80735 00:18:33.769 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:33.769 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.769 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80735 00:18:33.769 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:33.769 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:33.769 killing process with pid 80735 00:18:33.769 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80735' 00:18:33.769 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80735 00:18:33.769 Received shutdown signal, test time was about 2.000000 seconds 00:18:33.769 00:18:33.769 Latency(us) 00:18:33.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.769 =================================================================================================================== 00:18:33.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.769 19:09:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80735 00:18:33.769 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80526 00:18:33.769 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' 
-z 80526 ']' 00:18:33.769 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80526 00:18:33.769 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:33.769 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.769 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80526 00:18:34.027 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:34.027 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:34.027 killing process with pid 80526 00:18:34.028 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80526' 00:18:34.028 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80526 00:18:34.028 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80526 00:18:34.028 00:18:34.028 real 0m18.752s 00:18:34.028 user 0m36.491s 00:18:34.028 sys 0m4.777s 00:18:34.028 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:34.028 19:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:34.028 ************************************ 00:18:34.028 END TEST nvmf_digest_error 00:18:34.028 ************************************ 00:18:34.285 19:09:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:34.285 19:09:01 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:34.285 19:09:01 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:34.286 rmmod nvme_tcp 00:18:34.286 rmmod nvme_fabrics 00:18:34.286 rmmod nvme_keyring 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80526 ']' 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80526 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80526 ']' 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80526 00:18:34.286 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80526) - No such process 00:18:34.286 Process with pid 80526 is not found 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80526 is not found' 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:34.286 
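The check that passed just above (host/digest.sh@71, the '(( 394 > 0 ))' test) boils down to reading bdevperf's I/O statistics over its RPC socket and pulling out the NVMe transient-transport-error counter with jq. A minimal sketch of that step, using the same socket path, bdev name and jq filter traced in this run:

  # query bdevperf's RPC server for nvme0n1 statistics
  stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1)
  # extract the counter fed by the injected data-digest errors (394 in this run)
  errcount=$(jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error' <<< "$stats")
  # the test only requires that at least one such error was reported
  (( errcount > 0 ))

Each data-digest error reported by tcp.c above surfaces on the host side as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is exactly what increments this counter.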
19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:34.286 00:18:34.286 real 0m38.660s 00:18:34.286 user 1m14.092s 00:18:34.286 sys 0m9.981s 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:34.286 19:09:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:34.286 ************************************ 00:18:34.286 END TEST nvmf_digest 00:18:34.286 ************************************ 00:18:34.286 19:09:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:34.286 19:09:01 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:18:34.286 19:09:01 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:18:34.286 19:09:01 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:34.286 19:09:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:34.286 19:09:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:34.286 19:09:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:34.286 ************************************ 00:18:34.286 START TEST nvmf_host_multipath 00:18:34.286 ************************************ 00:18:34.286 19:09:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:34.545 * Looking for test storage... 
00:18:34.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.545 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:34.546 19:09:01 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:34.546 Cannot find device "nvmf_tgt_br" 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.546 Cannot find device "nvmf_tgt_br2" 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:18:34.546 Cannot find device "nvmf_tgt_br" 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:34.546 Cannot find device "nvmf_tgt_br2" 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:34.546 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
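nvmf_veth_init is building the virt-mode test network traced here: a dedicated network namespace for the target, three veth pairs whose bridge-side ends are collected under nvmf_br, 10.0.0.1/24 on the initiator interface and 10.0.0.2/10.0.0.3 on the target interfaces inside the namespace. Condensed into plain commands, with the link-up steps omitted (the remaining bridge attachments, iptables rules and ping checks continue below):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # nvmf_tgt_br and nvmf_tgt_br2 follow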
00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:34.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:18:34.814 00:18:34.814 --- 10.0.0.2 ping statistics --- 00:18:34.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.814 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:34.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:34.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:18:34.814 00:18:34.814 --- 10.0.0.3 ping statistics --- 00:18:34.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.814 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:34.814 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:34.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:34.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:34.814 00:18:34.814 --- 10.0.0.1 ping statistics --- 00:18:34.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.814 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:34.815 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.815 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:18:34.815 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:34.815 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.815 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:34.815 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:34.815 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.815 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:34.815 19:09:01 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=81001 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 81001 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 81001 ']' 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.815 19:09:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:34.815 [2024-07-15 19:09:02.059271] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:18:34.815 [2024-07-15 19:09:02.059361] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.072 [2024-07-15 19:09:02.195924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:35.072 [2024-07-15 19:09:02.314206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.072 [2024-07-15 19:09:02.314265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.072 [2024-07-15 19:09:02.314277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.072 [2024-07-15 19:09:02.314286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.072 [2024-07-15 19:09:02.314293] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:35.072 [2024-07-15 19:09:02.314706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.072 [2024-07-15 19:09:02.314722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.331 [2024-07-15 19:09:02.368033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:35.897 19:09:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.897 19:09:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:35.897 19:09:03 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.897 19:09:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:35.897 19:09:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:35.897 19:09:03 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.897 19:09:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81001 00:18:35.897 19:09:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:36.155 [2024-07-15 19:09:03.387411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.155 19:09:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:36.411 Malloc0 00:18:36.411 19:09:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:36.668 19:09:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:36.925 19:09:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.183 [2024-07-15 19:09:04.354738] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.183 19:09:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:37.440 [2024-07-15 19:09:04.578876] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:37.440 19:09:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81057 00:18:37.440 19:09:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.440 19:09:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:37.440 19:09:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81057 /var/tmp/bdevperf.sock 00:18:37.440 19:09:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81057 ']' 00:18:37.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
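Target-side configuration for the multipath test (host/multipath.sh@35-41, traced above) is a short sequence of rpc.py calls against the nvmf_tgt running in the namespace: one TCP transport, one malloc bdev, and one subsystem with ANA reporting enabled, exposed on two listeners of the same address. Roughly, with the same sizes, NQN and ports as this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MB backing bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -r -m 2                           # -r turns on ANA reporting for the listeners
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf then attaches the same subsystem through both listeners under the controller name Nvme0, the second attach with -x multipath, which is what the ANA-state flips below exercise.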
00:18:37.441 19:09:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.441 19:09:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.441 19:09:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.441 19:09:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.441 19:09:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:38.374 19:09:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.374 19:09:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:38.374 19:09:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:38.631 19:09:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:39.196 Nvme0n1 00:18:39.196 19:09:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:39.454 Nvme0n1 00:18:39.454 19:09:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:39.454 19:09:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:40.388 19:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:40.388 19:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:40.645 19:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:40.904 19:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:40.904 19:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:40.904 19:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81102 00:18:40.904 19:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:47.469 Attaching 4 probes... 
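'Attaching 4 probes...' is bpftrace starting scripts/bpf/nvmf_path.bt against the target, and the @path[...] counters that follow are the per-listener I/O counts it writes into test/nvmf/host/trace.txt. Each confirm_io_on_port round traced in this test amounts to roughly the following, with the helper plumbing (redirection into trace.txt, cleanup traps) omitted:

  # watch which listener the target actually serves I/O on (81001 = nvmf_tgt pid)
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt &
  dtrace_pid=$!
  sleep 6        # let bdevperf push I/O over whichever path is currently preferred
  # ask the target which listener carries the expected ANA state
  active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  kill "$dtrace_pid"   # the port observed in trace.txt is then compared against active_port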
00:18:47.469 @path[10.0.0.2, 4421]: 17363 00:18:47.469 @path[10.0.0.2, 4421]: 17760 00:18:47.469 @path[10.0.0.2, 4421]: 18077 00:18:47.469 @path[10.0.0.2, 4421]: 18115 00:18:47.469 @path[10.0.0.2, 4421]: 18079 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81102 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:47.469 19:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:48.033 19:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:48.033 19:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81219 00:18:48.033 19:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:48.033 19:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:54.593 Attaching 4 probes... 
00:18:54.593 @path[10.0.0.2, 4420]: 17863 00:18:54.593 @path[10.0.0.2, 4420]: 18093 00:18:54.593 @path[10.0.0.2, 4420]: 18144 00:18:54.593 @path[10.0.0.2, 4420]: 18312 00:18:54.593 @path[10.0.0.2, 4420]: 18239 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81219 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81333 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:54.593 19:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:01.144 19:09:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:01.144 19:09:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:01.144 Attaching 4 probes... 
00:19:01.144 @path[10.0.0.2, 4421]: 13136 00:19:01.144 @path[10.0.0.2, 4421]: 17883 00:19:01.144 @path[10.0.0.2, 4421]: 17907 00:19:01.144 @path[10.0.0.2, 4421]: 17928 00:19:01.144 @path[10.0.0.2, 4421]: 17831 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81333 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:01.144 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:01.441 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:01.441 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81444 00:19:01.441 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:01.441 19:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:07.993 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:07.993 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:07.993 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:07.994 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:07.994 Attaching 4 probes... 
00:19:07.994 00:19:07.994 00:19:07.994 00:19:07.994 00:19:07.994 00:19:07.994 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:07.994 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:07.994 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:07.994 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:07.994 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:07.994 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:07.994 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81444 00:19:07.994 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:07.994 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:07.994 19:09:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:07.994 19:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:08.250 19:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:08.250 19:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81558 00:19:08.250 19:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:08.250 19:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:14.806 Attaching 4 probes... 
00:19:14.806 @path[10.0.0.2, 4421]: 17196 00:19:14.806 @path[10.0.0.2, 4421]: 17621 00:19:14.806 @path[10.0.0.2, 4421]: 17616 00:19:14.806 @path[10.0.0.2, 4421]: 17500 00:19:14.806 @path[10.0.0.2, 4421]: 17273 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81558 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:14.806 19:09:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:15.743 19:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:15.743 19:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81676 00:19:15.743 19:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:15.743 19:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:22.303 19:09:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:22.303 19:09:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:22.303 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:22.303 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:22.303 Attaching 4 probes... 
00:19:22.303 @path[10.0.0.2, 4420]: 17202 00:19:22.303 @path[10.0.0.2, 4420]: 17378 00:19:22.303 @path[10.0.0.2, 4420]: 17372 00:19:22.303 @path[10.0.0.2, 4420]: 17472 00:19:22.303 @path[10.0.0.2, 4420]: 17513 00:19:22.303 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:22.303 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:22.303 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:22.303 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:22.303 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:22.303 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:22.303 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81676 00:19:22.303 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:22.303 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:22.303 [2024-07-15 19:09:49.404270] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:22.303 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:22.636 19:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:29.198 19:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:29.198 19:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81856 00:19:29.198 19:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81001 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:29.198 19:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:34.500 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:34.500 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:34.781 Attaching 4 probes... 
00:19:34.781 @path[10.0.0.2, 4421]: 17017 00:19:34.781 @path[10.0.0.2, 4421]: 17326 00:19:34.781 @path[10.0.0.2, 4421]: 17395 00:19:34.781 @path[10.0.0.2, 4421]: 17352 00:19:34.781 @path[10.0.0.2, 4421]: 17342 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81856 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81057 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81057 ']' 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81057 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:34.781 19:10:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81057 00:19:34.781 killing process with pid 81057 00:19:34.781 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:34.781 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:34.781 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81057' 00:19:34.781 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81057 00:19:34.781 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81057 00:19:35.046 Connection closed with partial response: 00:19:35.046 00:19:35.046 00:19:35.046 19:10:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81057 00:19:35.046 19:10:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:35.046 [2024-07-15 19:09:04.646550] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:19:35.046 [2024-07-15 19:09:04.646658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81057 ] 00:19:35.046 [2024-07-15 19:09:04.776820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.046 [2024-07-15 19:09:04.881740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.047 [2024-07-15 19:09:04.933493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:35.047 Running I/O for 90 seconds... 
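From this point try.txt is bdevperf's own qpair trace: each I/O appears twice, once from nvme_io_qpair_print_command (opcode, sqid/cid, LBA, length) and once from spdk_nvme_print_completion with the returned status, here ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. the command completed on a path whose ANA state had been switched to inaccessible, which is what lets the multipath bdev fail over to the other listener. A dump this size is easier to skim with a quick summary than line by line; a sketch, assuming the file is still present at the path shown above and that each trace record occupies a single line in it:

    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    # count traced submissions by opcode (READ vs WRITE)
    grep -Eo '\*NOTICE\*: (READ|WRITE) sqid' "$log" | sort | uniq -c
    # group completions by status text, e.g. ASYMMETRIC ACCESS INACCESSIBLE (03/02)
    grep -Eo 'print_completion: \*NOTICE\*: [A-Z ]+\([0-9a-f]{2}/[0-9a-f]{2}\)' "$log" \
        | sed 's/.*NOTICE\*: //' | sort | uniq -c | sort -rn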
00:19:35.047 [2024-07-15 19:09:15.035449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.035549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.035616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.035636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.035659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.035674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.035696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.035710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.035731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.035745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.035766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.035780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.035801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.035815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.035836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.035850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.035875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.035891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.035912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.035927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.035947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.035984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.036022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.036056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.036090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.036124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.036159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:35.047 [2024-07-15 19:09:15.036687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.047 [2024-07-15 19:09:15.036769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.036810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.036846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.036881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.036927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.036963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.036994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.037008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.037029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.037043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.037064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.037078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:35.047 [2024-07-15 19:09:15.037098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.047 [2024-07-15 19:09:15.037112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.037400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.037435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.037470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.037517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.037576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.037613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.037648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.037683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:19:35.048 [2024-07-15 19:09:15.037828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.037976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.037990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.038032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.038067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.038102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.038137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.038172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.038206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.038241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.038283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.038320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.038361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.038396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.048 [2024-07-15 19:09:15.038431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.038465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.038520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.038559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.038594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.038629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.048 [2024-07-15 19:09:15.038664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:35.048 [2024-07-15 19:09:15.038685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.038699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.038720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.038741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.038764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.038778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.038799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.038813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.038833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.038847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.038868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.038882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.038906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:35.049 [2024-07-15 19:09:15.038922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.038948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.038963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.038984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.038998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.039033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.039067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.039107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.039143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.039178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.039225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.039260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.039294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.039329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.039364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.039398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.039944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.039958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.041574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.049 [2024-07-15 19:09:15.041606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:19:35.049 [2024-07-15 19:09:15.041636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.041652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.041674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.041689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.041710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.041740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.041763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.041778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:35.049 [2024-07-15 19:09:15.041799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.049 [2024-07-15 19:09:15.041813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:15.041834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:15.041848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:15.041869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:15.041884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:15.041920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:15.041947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:15.041971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:15.041986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:15.042008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:15.042022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.568678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:21.568765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.568826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:21.568846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.568869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:21.568884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.568904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:21.568919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.568939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:21.568980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:21.569018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:21.569052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.050 [2024-07-15 19:09:21.569086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:35.050 [2024-07-15 19:09:21.569523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.050 [2024-07-15 19:09:21.569629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:35.050 [2024-07-15 19:09:21.569649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.569663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.569684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.569698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.569719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.569733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.569753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.569767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.569788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.569802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.569823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.569836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.569865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.569880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.569901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.569915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.569935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.569949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.569976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.051 [2024-07-15 19:09:21.569992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.570013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.051 [2024-07-15 19:09:21.570027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.570048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.051 [2024-07-15 19:09:21.570062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.570083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.051 [2024-07-15 19:09:21.570097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.570117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.051 [2024-07-15 19:09:21.570131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.570151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.051 [2024-07-15 19:09:21.570165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.570185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.051 [2024-07-15 19:09:21.570199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.570224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.051 [2024-07-15 19:09:21.570238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.570259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.570273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.570294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.570315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.570337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.570352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:35.051 [2024-07-15 19:09:21.570373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.051 [2024-07-15 19:09:21.570387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.570421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.570456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.570491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.570540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.570574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:19:35.052 [2024-07-15 19:09:21.570595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.570609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.570644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.570679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.570714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.570755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.570792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.570826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.570862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.570897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.570931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.570966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.570986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.571000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.571020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.571034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.571054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.571068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.571088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.571102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.571122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.571136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.571156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.571170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.571249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.571266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.571286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.571301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.571323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.571337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.571365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.571379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.571399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.571414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.571434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.052 [2024-07-15 19:09:21.571447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:35.052 [2024-07-15 19:09:21.571468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.052 [2024-07-15 19:09:21.571482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.053 [2024-07-15 19:09:21.571533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.053 [2024-07-15 19:09:21.571568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.053 [2024-07-15 19:09:21.571603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.053 [2024-07-15 19:09:21.571637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.053 [2024-07-15 19:09:21.571672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:35.053 [2024-07-15 19:09:21.571715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.053 [2024-07-15 19:09:21.571751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.571791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.571826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.571860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.571895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.571930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.571964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.571985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.571998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.572033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.572068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.572102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.572144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.572179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.572214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.572255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.572290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.053 [2024-07-15 19:09:21.572324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.053 [2024-07-15 19:09:21.572358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.053 [2024-07-15 19:09:21.572393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.053 [2024-07-15 19:09:21.572427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.053 [2024-07-15 19:09:21.572467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.053 [2024-07-15 19:09:21.572512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:35.053 [2024-07-15 19:09:21.572535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.054 [2024-07-15 19:09:21.572549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.572569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.054 [2024-07-15 19:09:21.572590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.572612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.054 [2024-07-15 19:09:21.572626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.572648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.054 [2024-07-15 19:09:21.572662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.572683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.054 [2024-07-15 19:09:21.572696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.572718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.054 [2024-07-15 19:09:21.572731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.572764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.054 [2024-07-15 19:09:21.572778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 
dnr:0 00:19:35.054 [2024-07-15 19:09:21.572799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.054 [2024-07-15 19:09:21.572813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.572833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.054 [2024-07-15 19:09:21.572852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.572874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.054 [2024-07-15 19:09:21.572888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.573588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.054 [2024-07-15 19:09:21.573616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.573651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.573667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.573697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.573712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.573741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.573755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.573801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.573817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.573846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.573860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.573889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.573903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.573932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.573946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.573992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.574012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.574044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.574059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.574088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.574102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.574131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.574145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.574174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.574188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.574217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.574231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.574259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.574279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.574309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.574324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:21.574362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:21.574377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:28.541591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.054 [2024-07-15 19:09:28.541664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:35.054 [2024-07-15 19:09:28.541725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.541745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.541768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.541784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.541805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.541819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.541840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.541854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.541874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.541888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.541909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.541923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.541943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.541958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.541978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.541992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:35.055 [2024-07-15 19:09:28.542026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.542060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.542121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.542158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.542192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.542227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.542261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.542302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.542340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.542374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.542409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.055 [2024-07-15 19:09:28.542443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.055 [2024-07-15 19:09:28.542477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.055 [2024-07-15 19:09:28.542527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.055 [2024-07-15 19:09:28.542576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.055 [2024-07-15 19:09:28.542614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.055 [2024-07-15 19:09:28.542649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.055 [2024-07-15 19:09:28.542683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.055 [2024-07-15 19:09:28.542718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.542753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:35.055 [2024-07-15 19:09:28.542774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.055 [2024-07-15 19:09:28.542788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.542808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.542822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.542843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.542858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.542883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.542898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.542919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.542934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.542954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.542968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.542989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.543003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.543047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.543081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.543116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:19:35.056 [2024-07-15 19:09:28.543136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.543150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.543185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.543219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.543253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.543287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.543322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.056 [2024-07-15 19:09:28.543356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.056 [2024-07-15 19:09:28.543391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.056 [2024-07-15 19:09:28.543427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.056 [2024-07-15 19:09:28.543470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.056 [2024-07-15 19:09:28.543518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.056 [2024-07-15 19:09:28.543554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.056 [2024-07-15 19:09:28.543589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.056 [2024-07-15 19:09:28.543624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.056 [2024-07-15 19:09:28.543659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.056 [2024-07-15 19:09:28.543693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.056 [2024-07-15 19:09:28.543728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.056 [2024-07-15 19:09:28.543762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.056 [2024-07-15 19:09:28.543799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:35.056 [2024-07-15 19:09:28.543820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.543834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.543854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.543868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.543889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.543910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.543932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.543947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.543968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.057 [2024-07-15 19:09:28.543982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.057 [2024-07-15 19:09:28.544018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.057 [2024-07-15 19:09:28.544088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.057 [2024-07-15 19:09:28.544125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.057 [2024-07-15 19:09:28.544160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.057 [2024-07-15 19:09:28.544195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:35.057 [2024-07-15 19:09:28.544230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.057 [2024-07-15 19:09:28.544264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.057 [2024-07-15 19:09:28.544298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.057 [2024-07-15 19:09:28.544333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.544379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.544416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.544451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.544488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.544541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.544579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.544616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.544654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.057 [2024-07-15 19:09:28.544691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:35.057 [2024-07-15 19:09:28.544713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.544728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.544761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.544778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.544799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.544813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.544834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.544856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.544879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.544893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.544915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.544929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.544949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.544964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.544989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:19:35.058 [2024-07-15 19:09:28.545345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.058 [2024-07-15 19:09:28.545546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.545580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.545615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.545649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.545683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.545718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.545760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.545795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.545829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.545863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:35.058 [2024-07-15 19:09:28.545883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.058 [2024-07-15 19:09:28.545897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.545918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:28.545931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.545951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:28.545965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.545986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:28.545999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.546019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:28.546033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.546054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:28.546068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.546728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:28.546755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.546802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:28.546819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.546849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:28.546875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.546906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:28.546921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.546951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:28.546965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.546995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:28.547009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.547038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:28.547053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.547083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:28.547098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:28.547144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:35.059 [2024-07-15 19:09:28.547164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.856958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.856984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.059 [2024-07-15 19:09:41.857001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.857027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:41.857052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.857082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:41.857102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.857142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:41.857163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.857190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:41.857208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.857234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:41.857253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.857279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:41.857297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.857322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:41.857340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:35.059 [2024-07-15 19:09:41.857365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.059 [2024-07-15 19:09:41.857384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.857409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.857427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.857452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.857470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.857496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.857538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.857565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.857584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.857610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.857628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 
dnr:0 00:19:35.060 [2024-07-15 19:09:41.857654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.857672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.857707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.857727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.857753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.857772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.857844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.857870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.857892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.857909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.857928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.857944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.857963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.857980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.857998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 
[2024-07-15 19:09:41.858102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.858444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.858481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.858538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.858575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.858610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.858651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.858686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.060 [2024-07-15 19:09:41.858729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:104 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.858974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.858992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.859009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.859028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.859045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.859063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.060 [2024-07-15 19:09:41.859079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.060 [2024-07-15 19:09:41.859097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92712 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.859333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.859368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.859403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.859437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.859471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.859525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.859561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 
[2024-07-15 19:09:41.859602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.859980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.859997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.860031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.860065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.860108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.860143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.860178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.860214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.860248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.860283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.860318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.860353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.860398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.860432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.061 [2024-07-15 19:09:41.860466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.061 [2024-07-15 19:09:41.860516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.061 [2024-07-15 19:09:41.860537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.062 [2024-07-15 19:09:41.860553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.860581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.062 [2024-07-15 19:09:41.860598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.860616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.062 [2024-07-15 19:09:41.860633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.860651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.062 [2024-07-15 19:09:41.860667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.860685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.062 [2024-07-15 19:09:41.860701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.860719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.062 [2024-07-15 19:09:41.860736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.860754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.062 [2024-07-15 19:09:41.860799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.860825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.062 [2024-07-15 19:09:41.860844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.860863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.062 [2024-07-15 19:09:41.860879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.860897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.062 [2024-07-15 19:09:41.860914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.860932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.062 [2024-07-15 19:09:41.860949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.860966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.062 [2024-07-15 19:09:41.860984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.062 [2024-07-15 19:09:41.861018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.062 [2024-07-15 19:09:41.861062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18350b0 is same with the state(5) to be set 00:19:35.062 [2024-07-15 19:09:41.861102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.062 [2024-07-15 19:09:41.861115] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.062 [2024-07-15 19:09:41.861129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92344 len:8 PRP1 0x0 PRP2 0x0 00:19:35.062 [2024-07-15 19:09:41.861145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.062 [2024-07-15 19:09:41.861176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.062 [2024-07-15 19:09:41.861189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92928 len:8 PRP1 0x0 PRP2 0x0 00:19:35.062 [2024-07-15 19:09:41.861205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.062 [2024-07-15 19:09:41.861234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.062 [2024-07-15 19:09:41.861246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92936 len:8 PRP1 0x0 PRP2 0x0 00:19:35.062 [2024-07-15 19:09:41.861262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.062 [2024-07-15 19:09:41.861290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.062 [2024-07-15 19:09:41.861304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92944 len:8 PRP1 0x0 PRP2 0x0 00:19:35.062 [2024-07-15 19:09:41.861320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.062 [2024-07-15 19:09:41.861349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.062 [2024-07-15 19:09:41.861361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92952 len:8 PRP1 0x0 PRP2 0x0 00:19:35.062 [2024-07-15 19:09:41.861377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.062 [2024-07-15 19:09:41.861406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.062 [2024-07-15 19:09:41.861418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92960 len:8 PRP1 0x0 PRP2 0x0 00:19:35.062 [2024-07-15 19:09:41.861434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.062 [2024-07-15 19:09:41.861463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:19:35.062 [2024-07-15 19:09:41.861476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92968 len:8 PRP1 0x0 PRP2 0x0 00:19:35.062 [2024-07-15 19:09:41.861492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.062 [2024-07-15 19:09:41.861545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.062 [2024-07-15 19:09:41.861558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92976 len:8 PRP1 0x0 PRP2 0x0 00:19:35.062 [2024-07-15 19:09:41.861578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.062 [2024-07-15 19:09:41.861608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.062 [2024-07-15 19:09:41.861621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92984 len:8 PRP1 0x0 PRP2 0x0 00:19:35.062 [2024-07-15 19:09:41.861636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861708] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18350b0 was disconnected and freed. reset controller. 00:19:35.062 [2024-07-15 19:09:41.861882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.062 [2024-07-15 19:09:41.861912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.062 [2024-07-15 19:09:41.861948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.062 [2024-07-15 19:09:41.861981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.062 [2024-07-15 19:09:41.861998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.063 [2024-07-15 19:09:41.862014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.063 [2024-07-15 19:09:41.862032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.063 [2024-07-15 19:09:41.862049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.063 [2024-07-15 19:09:41.862074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1836a70 is same with the state(5) to be set 00:19:35.063 [2024-07-15 19:09:41.863272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.063 [2024-07-15 19:09:41.863322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1836a70 (9): Bad file descriptor 00:19:35.063 [2024-07-15 19:09:41.863825] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.063 [2024-07-15 19:09:41.863863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1836a70 with addr=10.0.0.2, port=4421 00:19:35.063 [2024-07-15 19:09:41.863883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1836a70 is same with the state(5) to be set 00:19:35.063 [2024-07-15 19:09:41.863969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1836a70 (9): Bad file descriptor 00:19:35.063 [2024-07-15 19:09:41.864013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:35.063 [2024-07-15 19:09:41.864034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:35.063 [2024-07-15 19:09:41.864066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:35.063 [2024-07-15 19:09:41.864107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:35.063 [2024-07-15 19:09:41.864129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.063 [2024-07-15 19:09:51.925169] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
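The connect() failures with errno 111 (ECONNREFUSED) and the eventual "Resetting controller successful." above are the expected result of the multipath script flipping the secondary listener on port 4421. A minimal sketch of that kind of path flip, using only RPCs that appear elsewhere in this trace (the exact commands issued by multipath.sh are not reproduced in this log), would be:

    # drop the path the host is using; in-flight I/O is aborted (the "SQ DELETION" notices above)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ...the host keeps retrying connect() and gets errno 111 until the listener returns...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # the next reconnect attempt succeeds and bdev_nvme logs "Resetting controller successful."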
00:19:35.063 Received shutdown signal, test time was about 55.293373 seconds
00:19:35.063
00:19:35.063 Latency(us)
00:19:35.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:35.063 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:35.063 Verification LBA range: start 0x0 length 0x4000
00:19:35.063 Nvme0n1 : 55.29 7513.09 29.35 0.00 0.00 17002.35 1236.25 7015926.69
00:19:35.063 ===================================================================================================================
00:19:35.063 Total : 7513.09 29.35 0.00 0.00 17002.35 1236.25 7015926.69
00:19:35.063 19:10:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:35.320 19:10:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:35.321 rmmod nvme_tcp
00:19:35.321 rmmod nvme_fabrics
00:19:35.321 rmmod nvme_keyring
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 81001 ']'
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 81001
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81001 ']'
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81001
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81001
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81001'
00:19:35.321 killing process with pid 81001
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81001
00:19:35.321 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81001
00:19:35.578 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:35.578 19:10:02 nvmf_tcp.nvmf_host_multipath --
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:35.578 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:35.578 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:35.578 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:35.578 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.578 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.578 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.837 19:10:02 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:35.837 00:19:35.837 real 1m1.348s 00:19:35.837 user 2m49.578s 00:19:35.837 sys 0m18.891s 00:19:35.837 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:35.837 19:10:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:35.837 ************************************ 00:19:35.837 END TEST nvmf_host_multipath 00:19:35.837 ************************************ 00:19:35.837 19:10:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:35.837 19:10:02 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:35.837 19:10:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:35.837 19:10:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.837 19:10:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.837 ************************************ 00:19:35.837 START TEST nvmf_timeout 00:19:35.837 ************************************ 00:19:35.837 19:10:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:35.837 * Looking for test storage... 
00:19:35.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:35.837 19:10:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:35.837 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:35.837 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.837 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.838 
19:10:03 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.838 19:10:03 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:35.838 Cannot find device "nvmf_tgt_br" 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:35.838 Cannot find device "nvmf_tgt_br2" 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:35.838 Cannot find device "nvmf_tgt_br" 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:35.838 Cannot find device "nvmf_tgt_br2" 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:19:35.838 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:36.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:36.095 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:36.095 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:36.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:19:36.095 00:19:36.095 --- 10.0.0.2 ping statistics --- 00:19:36.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.096 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:36.096 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:36.096 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:19:36.096 00:19:36.096 --- 10.0.0.3 ping statistics --- 00:19:36.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.096 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:36.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:36.096 00:19:36.096 --- 10.0.0.1 ping statistics --- 00:19:36.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.096 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82159 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82159 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82159 ']' 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.096 19:10:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:36.352 [2024-07-15 19:10:03.429200] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
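The nvmf_veth_init trace above is easier to follow condensed. A sketch of the topology it builds, using only the interface names, addresses, and rules visible in the log (the individual "ip link set ... up" steps are elided), is:

    # target interfaces live in their own namespace; the initiator side stays in the default namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if      # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2     # second target address
    # bridge the *_br peers together so the namespaces can reach each other, then allow NVMe/TCP in
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the trace simply confirm that 10.0.0.2 and 10.0.0.3 are reachable from the default namespace and 10.0.0.1 from inside nvmf_tgt_ns_spdk before the target is started.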
00:19:36.352 [2024-07-15 19:10:03.429292] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.352 [2024-07-15 19:10:03.571030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:36.610 [2024-07-15 19:10:03.696011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.610 [2024-07-15 19:10:03.696077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.610 [2024-07-15 19:10:03.696092] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.610 [2024-07-15 19:10:03.696102] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.610 [2024-07-15 19:10:03.696111] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.610 [2024-07-15 19:10:03.697055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.610 [2024-07-15 19:10:03.697079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.610 [2024-07-15 19:10:03.752724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:37.177 19:10:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.177 19:10:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:37.177 19:10:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:37.177 19:10:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:37.177 19:10:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.434 19:10:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.434 19:10:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:37.434 19:10:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:37.434 [2024-07-15 19:10:04.706730] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.691 19:10:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:37.949 Malloc0 00:19:37.949 19:10:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:38.207 19:10:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:38.464 19:10:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:38.464 [2024-07-15 19:10:05.734540] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.464 19:10:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:38.464 
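Stripped of the xtrace prefixes, the target-side bring-up performed by timeout.sh@25-29 above reduces to five RPCs, with all values taken from this run:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192                                   # TCP transport, flags as issued by the test
    $rpc_py bdev_malloc_create 64 512 -b Malloc0                                      # 64 MB RAM bdev, 512-byte blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 # allow any host, fixed serial
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                  # expose Malloc0 as namespace 1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420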
19:10:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82214 00:19:38.464 19:10:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82214 /var/tmp/bdevperf.sock 00:19:38.721 19:10:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82214 ']' 00:19:38.721 19:10:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.721 19:10:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.722 19:10:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.722 19:10:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.722 19:10:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:38.722 [2024-07-15 19:10:05.795678] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:19:38.722 [2024-07-15 19:10:05.795761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82214 ] 00:19:38.722 [2024-07-15 19:10:05.932459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.979 [2024-07-15 19:10:06.055962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.979 [2024-07-15 19:10:06.111951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:39.543 19:10:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.543 19:10:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:39.543 19:10:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:39.799 19:10:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:40.363 NVMe0n1 00:19:40.363 19:10:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82232 00:19:40.363 19:10:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:40.363 19:10:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:40.363 Running I/O for 10 seconds... 
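On the host side, bdevperf is started idle (-z) on its own RPC socket and then driven entirely over that socket. A condensed sketch of the sequence traced above follows; the commands are copied from the trace, and the comments on the two timeout flags reflect their usual bdev_nvme semantics (retry the connection every reconnect-delay seconds, give up on the controller after ctrlr-loss-timeout seconds), which is exactly the behavior the timeout test provokes later by removing the listener.

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -f &
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $rpc_py bdev_nvme_set_options -r -1                    # retry option exactly as issued in the trace
    # --ctrlr-loss-timeout-sec 5: fail the controller after ~5 s without a working connection
    # --reconnect-delay-sec 2:    attempt to reconnect every ~2 s in the meantime
    $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # kick off the I/O job that prints "Running I/O for 10 seconds..." above
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests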
00:19:41.298 19:10:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:41.559 [2024-07-15 19:10:08.673210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 
[2024-07-15 19:10:08.673472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.673956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.673987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.559 [2024-07-15 19:10:08.673996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.674007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.674016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.674029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.674038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.674049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.674058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.674076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.674085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.674096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.559 [2024-07-15 19:10:08.674105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.559 [2024-07-15 19:10:08.674117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.560 [2024-07-15 19:10:08.674166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:41.560 [2024-07-15 19:10:08.674379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674604] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.560 [2024-07-15 19:10:08.674975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.560 [2024-07-15 19:10:08.674985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.674994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62184 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:41.561 [2024-07-15 19:10:08.675428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675639] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.561 [2024-07-15 19:10:08.675864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.561 [2024-07-15 19:10:08.675873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.562 [2024-07-15 19:10:08.675884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.562 [2024-07-15 19:10:08.675894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.562 [2024-07-15 19:10:08.675905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.562 [2024-07-15 19:10:08.675914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.562 [2024-07-15 19:10:08.675925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.562 [2024-07-15 19:10:08.675934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.562 [2024-07-15 19:10:08.675945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.562 [2024-07-15 19:10:08.675954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.562 [2024-07-15 19:10:08.675965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.562 [2024-07-15 19:10:08.675974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.562 [2024-07-15 19:10:08.675984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb139a0 is same with the state(5) to be set 00:19:41.562 [2024-07-15 19:10:08.675997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.562 [2024-07-15 19:10:08.676004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.562 [2024-07-15 19:10:08.676018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62480 len:8 PRP1 0x0 PRP2 0x0 00:19:41.562 [2024-07-15 19:10:08.676028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.562 [2024-07-15 19:10:08.676083] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb139a0 was disconnected and freed. reset controller. 
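The wall of nvme_qpair messages above is the qpair teardown path printing every in-flight verify command and completing it by hand once the listener disappears; each entry decodes the same way, for example:

  WRITE sqid:1 cid:48 nsid:1 lba:62488 len:8      -> a queued 4 KiB write (8 x 512-byte blocks)
  ABORTED - SQ DELETION (00/08) ... dnr:0         -> completed with generic status 0x08,
                                                     "Command Aborted due to SQ Deletion",
                                                     i.e. dropped when the qpair went down

Once the whole queue has been drained this way, the qpair is freed and bdev_nvme moves on to resetting the controller.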
00:19:41.562 [2024-07-15 19:10:08.676346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:41.562 [2024-07-15 19:10:08.676429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac2ee0 (9): Bad file descriptor 00:19:41.562 [2024-07-15 19:10:08.676548] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.562 [2024-07-15 19:10:08.676569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac2ee0 with addr=10.0.0.2, port=4420 00:19:41.562 [2024-07-15 19:10:08.676580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac2ee0 is same with the state(5) to be set 00:19:41.562 [2024-07-15 19:10:08.676599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac2ee0 (9): Bad file descriptor 00:19:41.562 [2024-07-15 19:10:08.676615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:41.562 [2024-07-15 19:10:08.676625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:41.562 [2024-07-15 19:10:08.676635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:41.562 [2024-07-15 19:10:08.676655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:41.562 [2024-07-15 19:10:08.676666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:41.562 19:10:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:43.464 [2024-07-15 19:10:10.676968] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:43.464 [2024-07-15 19:10:10.677063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac2ee0 with addr=10.0.0.2, port=4420 00:19:43.464 [2024-07-15 19:10:10.677081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac2ee0 is same with the state(5) to be set 00:19:43.464 [2024-07-15 19:10:10.677111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac2ee0 (9): Bad file descriptor 00:19:43.464 [2024-07-15 19:10:10.677144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:43.464 [2024-07-15 19:10:10.677156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:43.464 [2024-07-15 19:10:10.677168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:43.464 [2024-07-15 19:10:10.677196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:43.464 [2024-07-15 19:10:10.677209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:43.464 19:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:43.464 19:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:43.464 19:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:43.722 19:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:43.722 19:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:43.722 19:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:43.722 19:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:43.980 19:10:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:43.980 19:10:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:45.880 [2024-07-15 19:10:12.677517] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:45.880 [2024-07-15 19:10:12.677612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac2ee0 with addr=10.0.0.2, port=4420 00:19:45.880 [2024-07-15 19:10:12.677639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac2ee0 is same with the state(5) to be set 00:19:45.880 [2024-07-15 19:10:12.677681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac2ee0 (9): Bad file descriptor 00:19:45.880 [2024-07-15 19:10:12.677714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:45.880 [2024-07-15 19:10:12.677732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:45.880 [2024-07-15 19:10:12.677750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:45.880 [2024-07-15 19:10:12.677791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:45.880 [2024-07-15 19:10:12.677812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:47.779 [2024-07-15 19:10:14.677874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:47.779 [2024-07-15 19:10:14.677925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:47.779 [2024-07-15 19:10:14.677939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:47.779 [2024-07-15 19:10:14.677950] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:47.779 [2024-07-15 19:10:14.677979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
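Lining the timestamps up against the attach options gives the expected timeout arithmetic (approximate, read straight from the log above):

  19:10:08  listener removed; in-flight I/O aborted; first reconnect fails (errno 111, connection refused)
  19:10:10  retry after reconnect-delay-sec=2 fails the same way
  19:10:12  next 2-second retry fails
  19:10:14  the following reset attempt finds the controller already in the failed state
            (ctrlr-loss-timeout-sec=5 has expired) and gives up

With the controller gone, the verify job ends early and reports failed I/O in the summary below.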
00:19:48.853 00:19:48.853 Latency(us) 00:19:48.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.853 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:48.853 Verification LBA range: start 0x0 length 0x4000 00:19:48.853 NVMe0n1 : 8.10 950.50 3.71 15.80 0.00 132267.97 3991.74 7015926.69 00:19:48.853 =================================================================================================================== 00:19:48.853 Total : 950.50 3.71 15.80 0.00 132267.97 3991.74 7015926.69 00:19:48.853 0 00:19:49.112 19:10:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:49.112 19:10:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:49.112 19:10:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:49.371 19:10:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:49.371 19:10:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:49.371 19:10:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:49.371 19:10:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82232 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82214 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82214 ']' 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82214 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82214 00:19:49.630 killing process with pid 82214 00:19:49.630 Received shutdown signal, test time was about 9.176864 seconds 00:19:49.630 00:19:49.630 Latency(us) 00:19:49.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.630 =================================================================================================================== 00:19:49.630 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82214' 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82214 00:19:49.630 19:10:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82214 00:19:49.888 19:10:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:49.888 [2024-07-15 19:10:17.170687] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
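The summary numbers are self-consistent with the 4 KiB I/O size and the queue depth of 128 used by bdevperf; as a rough check:

  950.50 IOPS x 4096 B  =  3,893,248 B/s  ~  3.71 MiB/s          (matches the MiB/s column)
  128 (queue depth) / 132,268 us average latency  ~  968 IOPS    (same ballpark as the 950.50 reported)

The later all-zero summary looks like bdevperf's shutdown report, printed when killprocess stops it after the test body has already completed.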
00:19:50.148 19:10:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82354 00:19:50.148 19:10:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:50.148 19:10:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82354 /var/tmp/bdevperf.sock 00:19:50.148 19:10:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82354 ']' 00:19:50.148 19:10:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.148 19:10:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.148 19:10:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.148 19:10:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.148 19:10:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:50.148 [2024-07-15 19:10:17.242661] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:19:50.148 [2024-07-15 19:10:17.242758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82354 ] 00:19:50.148 [2024-07-15 19:10:17.379873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.407 [2024-07-15 19:10:17.487434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.407 [2024-07-15 19:10:17.539699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:50.973 19:10:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.973 19:10:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:50.973 19:10:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:51.251 19:10:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:51.508 NVMe0n1 00:19:51.508 19:10:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82372 00:19:51.508 19:10:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:51.508 19:10:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:51.766 Running I/O for 10 seconds... 
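The second pass reuses the same target setup but tightens the initiator's reconnect policy and adds a fast-I/O-fail window (again copied from the logged RPC call):

  # second bdevperf instance (pid 82354): retry every 1 s, start failing queued
  # I/O back to the application after 2 s of disconnection, and only give the
  # controller up entirely after 5 s.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

perform_tests (rpc_pid 82372) then kicks off the next 10-second verify run before the listener is removed again.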
00:19:52.704 19:10:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:52.704 [2024-07-15 19:10:19.960245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03620 is same with the state(5) to be set
00:19:52.705 [... the target repeats the same tcp.c:1621 recv-state message for tqpair=0xa03620 on every subsequent state transition, timestamps 19:10:19.960296 through 19:10:19.961062, while the connection is torn down ...]
00:19:52.705 [2024-07-15 19:10:19.961670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.705 [2024-07-15 19:10:19.961702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.705 [... the host prints the same print_command / print_completion pair for every other outstanding command on qid:1: READs at lba 61888 through 62616 and WRITEs at lba 62632 through 62744, each completed as ABORTED - SQ DELETION (00/08) ...]
00:19:52.708 [2024-07-15 19:10:19.963996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a9a0 is same with the state(5) to be set
00:19:52.708 [2024-07-15 19:10:19.964008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:52.708 [... requests still queued on the host side are then completed manually: each is logged as "aborting queued i/o", "Command completed manually:", the command itself with PRP1 0x0 PRP2 0x0 (READ lba 62624, then WRITEs lba 62752 through 62856), and an ABORTED - SQ DELETION completion ...]
00:19:52.709 19:10:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:19:52.709 [... the manual completions continue after the sleep starts, timestamps 19:10:19.979137 onward, for WRITEs at lba 62864 through 62880; the captured output ends mid-entry ...]
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62888 len:8 PRP1 0x0 PRP2 0x0 00:19:52.709 [2024-07-15 19:10:19.979284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.709 [2024-07-15 19:10:19.979294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.709 [2024-07-15 19:10:19.979301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.709 [2024-07-15 19:10:19.979308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62896 len:8 PRP1 0x0 PRP2 0x0 00:19:52.709 [2024-07-15 19:10:19.979317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.709 [2024-07-15 19:10:19.979385] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e3a9a0 was disconnected and freed. reset controller. 00:19:52.709 [2024-07-15 19:10:19.979518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.709 [2024-07-15 19:10:19.979536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.709 [2024-07-15 19:10:19.979548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.709 [2024-07-15 19:10:19.979557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.709 [2024-07-15 19:10:19.979567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.709 [2024-07-15 19:10:19.979577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.709 [2024-07-15 19:10:19.979586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.709 [2024-07-15 19:10:19.979595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.709 [2024-07-15 19:10:19.979603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9ee0 is same with the state(5) to be set 00:19:52.709 [2024-07-15 19:10:19.979840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:52.709 [2024-07-15 19:10:19.979874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9ee0 (9): Bad file descriptor 00:19:52.709 [2024-07-15 19:10:19.979974] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.709 [2024-07-15 19:10:19.980003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de9ee0 with addr=10.0.0.2, port=4420 00:19:52.709 [2024-07-15 19:10:19.980015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9ee0 is same with the state(5) to be set 00:19:52.709 [2024-07-15 19:10:19.980033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9ee0 (9): Bad file descriptor 00:19:52.709 [2024-07-15 19:10:19.980049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:52.709 [2024-07-15 19:10:19.980058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:52.709 [2024-07-15 19:10:19.980068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:52.709 [2024-07-15 19:10:19.980088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.709 [2024-07-15 19:10:19.980098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:54.082 [2024-07-15 19:10:20.980249] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.082 [2024-07-15 19:10:20.980328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de9ee0 with addr=10.0.0.2, port=4420 00:19:54.082 [2024-07-15 19:10:20.980347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9ee0 is same with the state(5) to be set 00:19:54.082 [2024-07-15 19:10:20.980377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9ee0 (9): Bad file descriptor 00:19:54.082 [2024-07-15 19:10:20.980397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:54.082 [2024-07-15 19:10:20.980407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:54.082 [2024-07-15 19:10:20.980418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:54.082 [2024-07-15 19:10:20.980446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:54.082 [2024-07-15 19:10:20.980458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:54.082 19:10:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:54.082 [2024-07-15 19:10:21.234752] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.082 19:10:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82372 00:19:55.015 [2024-07-15 19:10:21.998777] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:01.596 00:20:01.596 Latency(us) 00:20:01.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.596 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:01.596 Verification LBA range: start 0x0 length 0x4000 00:20:01.596 NVMe0n1 : 10.01 6443.27 25.17 0.00 0.00 19834.65 1392.64 3050402.91 00:20:01.596 =================================================================================================================== 00:20:01.596 Total : 6443.27 25.17 0.00 0.00 19834.65 1392.64 3050402.91 00:20:01.596 0 00:20:01.854 19:10:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82482 00:20:01.854 19:10:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:01.854 19:10:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:20:01.854 Running I/O for 10 seconds... 
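The nvmf_timeout test drives all of this by toggling the target's TCP listener while bdevperf keeps I/O in flight: with the listener gone, the host's connect() fails with errno 111 (ECONNREFUSED) and bdev_nvme keeps retrying the controller reset; once host/timeout.sh@91 re-adds the listener the reset succeeds and bdevperf prints the latency summary above, after which a fresh run is started and the removal is repeated below. The following is a minimal sketch of that toggle pattern using the same rpc.py calls that appear in this log; the repository paths, address/port, socket path and sleep duration are taken from the log or assumed for illustration, not the test's exact logic.

#!/usr/bin/env bash
# Sketch of the listener-toggle step exercised here (illustrative only).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Drop the TCP listener: the host's reconnect attempts now fail with
# errno 111 (ECONNREFUSED) and bdev_nvme keeps retrying the controller reset.
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
sleep 5   # long enough for queued I/O to be aborted (duration illustrative)

# Re-add the listener: the next reconnect succeeds and the driver logs
# "Resetting controller successful.", after which queued I/O drains.
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

# Ask the running bdevperf instance for another measurement pass over the
# recovered path (socket path as used by this job).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests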
00:20:02.791 19:10:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:03.054 [2024-07-15 19:10:30.150105] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00bb0 is same with the state(5) to be set
[... the same tcp.c:1621 recv-state message for tqpair=0xa00bb0 repeats for every state transition while the listener's qpair is torn down ...]
00:20:03.055 [2024-07-15 19:10:30.151037] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xa00bb0 is same with the state(5) to be set 00:20:03.055 [2024-07-15 19:10:30.151045] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00bb0 is same with the state(5) to be set 00:20:03.055 [2024-07-15 19:10:30.151106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.055 [2024-07-15 19:10:30.151137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.055 [2024-07-15 19:10:30.151160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.055 [2024-07-15 19:10:30.151171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.055 [2024-07-15 19:10:30.151182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.055 [2024-07-15 19:10:30.151192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.055 [2024-07-15 19:10:30.151203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.055 [2024-07-15 19:10:30.151213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.055 [2024-07-15 19:10:30.151224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.055 [2024-07-15 19:10:30.151233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.055 [2024-07-15 19:10:30.151244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.055 [2024-07-15 19:10:30.151252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.055 [2024-07-15 19:10:30.151264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.055 [2024-07-15 19:10:30.151273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.055 [2024-07-15 19:10:30.151284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.055 [2024-07-15 19:10:30.151293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.055 [2024-07-15 19:10:30.151310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 
[2024-07-15 19:10:30.151554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151962] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.151982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.151993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.152001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.152012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.152021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.152032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.152040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.152051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.152061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.152072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.152081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.152092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.152102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.152113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.152123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.152134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.152142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.152153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.152162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.056 [2024-07-15 19:10:30.152173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.056 [2024-07-15 19:10:30.152182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
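Every completion printed in this stretch carries the status "ABORTED - SQ DELETION (00/08)": the pair in parentheses is the NVMe status code type and status code, here generic status (0x00) with Command Aborted due to SQ Deletion (0x08), which is how the host driver manually completes READs that were still queued on qpair 1 when the disconnected submission queue was deleted. Below is a small helper for reading those pairs out of a saved copy of this output; the log filename and the two-entry mapping (only the codes that occur in this run) are illustrative assumptions.

# decode_status SCT SC - expand the "(sct/sc)" pair that
# spdk_nvme_print_completion appends to each completion record. Only the two
# codes that occur in this run are mapped; anything else is echoed raw.
decode_status() {
    case "$1/$2" in
        00/00) echo "GENERIC / SUCCESSFUL COMPLETION" ;;
        00/08) echo "GENERIC / COMMAND ABORTED DUE TO SQ DELETION" ;;
        *)     echo "sct=0x$1 sc=0x$2 (not mapped here)" ;;
    esac
}

# Example: count how many queued commands were completed with that status in a
# captured copy of this output (nvmf_timeout.log is a hypothetical capture).
decode_status 00 08
grep -c 'ABORTED - SQ DELETION (00/08)' nvmf_timeout.log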
00:20:03.057 [2024-07-15 19:10:30.152371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152582] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.152984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.152994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.153003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.153014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.153023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.153033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.153042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.057 [2024-07-15 19:10:30.153053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.057 [2024-07-15 19:10:30.153062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64576 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:03.058 [2024-07-15 19:10:30.153404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153614] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.058 [2024-07-15 19:10:30.153713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.058 [2024-07-15 19:10:30.153735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.058 [2024-07-15 19:10:30.153745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67550 is same with the state(5) to be set 00:20:03.058 [2024-07-15 19:10:30.153757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.058 [2024-07-15 19:10:30.153765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.059 [2024-07-15 19:10:30.153773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64672 len:8 PRP1 0x0 PRP2 0x0 00:20:03.059 [2024-07-15 19:10:30.153782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.059 [2024-07-15 19:10:30.153838] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e67550 was disconnected and freed. reset controller. 
00:20:03.059 [2024-07-15 19:10:30.154061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:03.059 [2024-07-15 19:10:30.154146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9ee0 (9): Bad file descriptor 00:20:03.059 [2024-07-15 19:10:30.154264] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.059 [2024-07-15 19:10:30.154295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de9ee0 with addr=10.0.0.2, port=4420 00:20:03.059 [2024-07-15 19:10:30.154307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9ee0 is same with the state(5) to be set 00:20:03.059 [2024-07-15 19:10:30.154326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9ee0 (9): Bad file descriptor 00:20:03.059 [2024-07-15 19:10:30.154341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:03.059 [2024-07-15 19:10:30.154351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:03.059 [2024-07-15 19:10:30.154361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:03.059 [2024-07-15 19:10:30.154381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:03.059 [2024-07-15 19:10:30.154393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:03.059 19:10:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:04.029 [2024-07-15 19:10:31.154536] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:04.029 [2024-07-15 19:10:31.154601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de9ee0 with addr=10.0.0.2, port=4420 00:20:04.029 [2024-07-15 19:10:31.154618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9ee0 is same with the state(5) to be set 00:20:04.029 [2024-07-15 19:10:31.154646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9ee0 (9): Bad file descriptor 00:20:04.029 [2024-07-15 19:10:31.154679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:04.029 [2024-07-15 19:10:31.154699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:04.029 [2024-07-15 19:10:31.154710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:04.029 [2024-07-15 19:10:31.154736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:04.029 [2024-07-15 19:10:31.154748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:04.964 [2024-07-15 19:10:32.154903] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:04.964 [2024-07-15 19:10:32.154978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de9ee0 with addr=10.0.0.2, port=4420 00:20:04.964 [2024-07-15 19:10:32.154995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9ee0 is same with the state(5) to be set 00:20:04.964 [2024-07-15 19:10:32.155023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9ee0 (9): Bad file descriptor 00:20:04.964 [2024-07-15 19:10:32.155055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:04.964 [2024-07-15 19:10:32.155067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:04.964 [2024-07-15 19:10:32.155079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:04.964 [2024-07-15 19:10:32.155105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:04.964 [2024-07-15 19:10:32.155116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:05.899 [2024-07-15 19:10:33.158762] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.899 [2024-07-15 19:10:33.158827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de9ee0 with addr=10.0.0.2, port=4420 00:20:05.899 [2024-07-15 19:10:33.158843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de9ee0 is same with the state(5) to be set 00:20:05.899 [2024-07-15 19:10:33.159093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9ee0 (9): Bad file descriptor 00:20:05.899 [2024-07-15 19:10:33.159337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:05.899 [2024-07-15 19:10:33.159358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:05.899 [2024-07-15 19:10:33.159370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:05.899 [2024-07-15 19:10:33.163196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.899 [2024-07-15 19:10:33.163226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:05.899 19:10:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.156 [2024-07-15 19:10:33.382928] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.156 19:10:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82482 00:20:07.105 [2024-07-15 19:10:34.196887] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:12.366 00:20:12.366 Latency(us) 00:20:12.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.366 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:12.366 Verification LBA range: start 0x0 length 0x4000 00:20:12.366 NVMe0n1 : 10.01 5412.45 21.14 3682.49 0.00 14038.29 677.70 3019898.88 00:20:12.366 =================================================================================================================== 00:20:12.366 Total : 5412.45 21.14 3682.49 0.00 14038.29 0.00 3019898.88 00:20:12.366 0 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82354 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82354 ']' 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82354 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82354 00:20:12.366 killing process with pid 82354 00:20:12.366 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.366 00:20:12.366 Latency(us) 00:20:12.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.366 =================================================================================================================== 00:20:12.366 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82354' 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82354 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82354 00:20:12.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82598 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82598 /var/tmp/bdevperf.sock 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82598 ']' 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.366 19:10:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:12.366 [2024-07-15 19:10:39.341554] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:20:12.366 [2024-07-15 19:10:39.341643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82598 ] 00:20:12.366 [2024-07-15 19:10:39.479253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.366 [2024-07-15 19:10:39.586657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.366 [2024-07-15 19:10:39.638721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:13.300 19:10:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.300 19:10:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:13.300 19:10:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82614 00:20:13.300 19:10:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82598 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:13.300 19:10:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:13.558 19:10:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:13.816 NVMe0n1 00:20:13.816 19:10:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82654 00:20:13.816 19:10:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:13.816 19:10:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:13.816 Running I/O for 10 seconds... 
00:20:14.749 19:10:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.010 [2024-07-15 19:10:42.233124] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233184] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233202] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233211] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233219] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233228] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.010 [2024-07-15 19:10:42.233254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.010 [2024-07-15 19:10:42.233271] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233280] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.010 [2024-07-15 19:10:42.233288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.010 [2024-07-15 19:10:42.233297] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233303] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.010 [2024-07-15 19:10:42.233306] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.010 [2024-07-15 19:10:42.233314] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:15.010 [2024-07-15 19:10:42.233332] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.010 [2024-07-15 19:10:42.233340] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c53da0 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233349] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233357] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233365] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233374] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233382] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233390] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233398] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233406] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233414] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233422] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233438] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233446] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233455] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233480] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.010 [2024-07-15 19:10:42.233488] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233496] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233520] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233529] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233537] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233552] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233561] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233570] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233578] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233586] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233627] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233635] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233643] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233651] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233659] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233666] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233674] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233682] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233690] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233698] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233713] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233721] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233729] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233738] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233746] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233754] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233762] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233770] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233778] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233794] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233802] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 
00:20:15.011 [2024-07-15 19:10:42.233818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233826] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233833] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233849] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233857] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233865] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233873] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233881] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233890] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233905] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233921] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233929] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233937] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233945] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233953] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233961] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is 
same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.233994] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234002] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234011] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234019] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234027] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234036] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234044] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234051] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234067] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234075] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234100] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234108] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234116] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234124] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234133] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234141] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234149] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234157] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234165] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234172] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234180] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234189] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234196] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06b80 is same with the state(5) to be set 00:20:15.011 [2024-07-15 19:10:42.234251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.011 [2024-07-15 19:10:42.234270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.011 [2024-07-15 19:10:42.234290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:68416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.011 [2024-07-15 19:10:42.234301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.011 [2024-07-15 19:10:42.234312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.012 [2024-07-15 19:10:42.234321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.012 [2024-07-15 19:10:42.234332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.012 [2024-07-15 19:10:42.234341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.012 [2024-07-15 19:10:42.234353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.012 [2024-07-15 19:10:42.234362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.012 [2024-07-15 19:10:42.234373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.012 [2024-07-15 19:10:42.234381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.012 [2024-07-15 19:10:42.234392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.012 [2024-07-15 19:10:42.234401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.012 [2024-07-15 19:10:42.234412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.012 [2024-07-15 19:10:42.234421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.012 [2024-07-15 
19:10:42.234432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:15.012 [2024-07-15 19:10:42.234441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same NOTICE pair repeats for READ commands cid:9 through cid:126 (sqid:1, nsid:1, len:8, varying lba), each completed as ABORTED - SQ DELETION (00/08) while the submission queue is deleted; timestamps 19:10:42.234452 through 19:10:42.236890 ...]
00:20:15.015 [2024-07-15 19:10:42.236900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4640 is same with the state(5) to be set 00:20:15.015 [2024-07-15 19:10:42.236915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:15.015 [2024-07-15 19:10:42.236923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:15.015 [2024-07-15 19:10:42.236931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44216 len:8 PRP1 0x0 PRP2 0x0 00:20:15.015 [2024-07-15 19:10:42.236944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:15.015 [2024-07-15 19:10:42.236997] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ca4640 was disconnected and freed. reset controller. 00:20:15.015 [2024-07-15 19:10:42.237272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:15.015 [2024-07-15 19:10:42.237305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c53da0 (9): Bad file descriptor 00:20:15.015 [2024-07-15 19:10:42.237412] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:15.015 [2024-07-15 19:10:42.237443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c53da0 with addr=10.0.0.2, port=4420 00:20:15.015 [2024-07-15 19:10:42.237455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c53da0 is same with the state(5) to be set 00:20:15.015 [2024-07-15 19:10:42.237473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c53da0 (9): Bad file descriptor 00:20:15.015 [2024-07-15 19:10:42.237490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:15.015 [2024-07-15 19:10:42.237512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:15.015 [2024-07-15 19:10:42.237524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:15.015 [2024-07-15 19:10:42.237545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:15.015 [2024-07-15 19:10:42.237555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:15.015 19:10:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82654 00:20:17.543 [2024-07-15 19:10:44.237845] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:17.543 [2024-07-15 19:10:44.237904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c53da0 with addr=10.0.0.2, port=4420 00:20:17.543 [2024-07-15 19:10:44.237922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c53da0 is same with the state(5) to be set 00:20:17.543 [2024-07-15 19:10:44.237950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c53da0 (9): Bad file descriptor 00:20:17.543 [2024-07-15 19:10:44.237994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:17.543 [2024-07-15 19:10:44.238005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:17.543 [2024-07-15 19:10:44.238021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:17.543 [2024-07-15 19:10:44.238049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
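The connect() failures (errno = 111) and the roughly 2-second gaps between the reset attempts above are the behaviour host/timeout.sh exercises: the bdev_nvme layer keeps retrying the lost controller on a fixed reconnect delay until its controller-loss timeout expires. A minimal sketch of how such a policy can be set when attaching the bdev controller over the SPDK RPC is shown below; the option names are recalled from rpc.py and the timeout values are illustrative, not taken from this run.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Attach NVMe0 over TCP and, on connection loss, retry every 2 s,
  # declaring the controller lost after 8 s of failed reconnects.
  "$rpc_py" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8 --fast-io-fail-timeout-sec 4

Each retry that waits out the delay is logged as 'reconnect delay bdev controller NVMe0' in trace.txt, which the grep further down counts.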
00:20:17.543 [2024-07-15 19:10:44.238060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.472 [2024-07-15 19:10:46.238335] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.472 [2024-07-15 19:10:46.238406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c53da0 with addr=10.0.0.2, port=4420 00:20:19.472 [2024-07-15 19:10:46.238423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c53da0 is same with the state(5) to be set 00:20:19.472 [2024-07-15 19:10:46.238452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c53da0 (9): Bad file descriptor 00:20:19.472 [2024-07-15 19:10:46.238473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.472 [2024-07-15 19:10:46.238483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.472 [2024-07-15 19:10:46.238495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.472 [2024-07-15 19:10:46.238535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.472 [2024-07-15 19:10:46.238548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.371 [2024-07-15 19:10:48.238659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.371 [2024-07-15 19:10:48.238719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.371 [2024-07-15 19:10:48.238732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.371 [2024-07-15 19:10:48.238743] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:21.371 [2024-07-15 19:10:48.238771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.307 00:20:22.307 Latency(us) 00:20:22.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.307 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:22.307 NVMe0n1 : 8.15 2119.91 8.28 15.71 0.00 59834.99 1563.93 7015926.69 00:20:22.307 =================================================================================================================== 00:20:22.307 Total : 2119.91 8.28 15.71 0.00 59834.99 1563.93 7015926.69 00:20:22.307 0 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:22.307 Attaching 5 probes... 
00:20:22.307 1387.663824: reset bdev controller NVMe0 00:20:22.307 1387.743569: reconnect bdev controller NVMe0 00:20:22.307 3388.118737: reconnect delay bdev controller NVMe0 00:20:22.307 3388.139974: reconnect bdev controller NVMe0 00:20:22.307 5388.584269: reconnect delay bdev controller NVMe0 00:20:22.307 5388.610938: reconnect bdev controller NVMe0 00:20:22.307 7389.032604: reconnect delay bdev controller NVMe0 00:20:22.307 7389.054391: reconnect bdev controller NVMe0 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82614 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82598 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82598 ']' 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82598 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82598 00:20:22.307 killing process with pid 82598 00:20:22.307 Received shutdown signal, test time was about 8.203960 seconds 00:20:22.307 00:20:22.307 Latency(us) 00:20:22.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.307 =================================================================================================================== 00:20:22.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82598' 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82598 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82598 00:20:22.307 19:10:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.566 19:10:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:22.566 19:10:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:22.566 19:10:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:22.566 19:10:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:20:22.566 19:10:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.566 19:10:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:20:22.566 19:10:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.566 19:10:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.825 rmmod nvme_tcp 00:20:22.825 rmmod nvme_fabrics 00:20:22.825 rmmod nvme_keyring 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82159 ']' 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82159 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82159 ']' 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82159 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82159 00:20:22.825 killing process with pid 82159 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82159' 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82159 00:20:22.825 19:10:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82159 00:20:23.084 19:10:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:23.084 19:10:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:23.084 19:10:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:23.084 19:10:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.084 19:10:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:23.084 19:10:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.084 19:10:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.084 19:10:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.084 19:10:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:23.084 00:20:23.084 real 0m47.292s 00:20:23.084 user 2m19.130s 00:20:23.084 sys 0m5.668s 00:20:23.084 ************************************ 00:20:23.084 END TEST nvmf_timeout 00:20:23.084 ************************************ 00:20:23.084 19:10:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.084 19:10:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:23.084 19:10:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:23.084 19:10:50 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:20:23.084 19:10:50 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:20:23.084 19:10:50 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:23.084 19:10:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.084 19:10:50 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:20:23.084 00:20:23.084 real 12m17.280s 00:20:23.084 user 29m58.632s 00:20:23.084 sys 3m2.229s 00:20:23.084 19:10:50 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.084 19:10:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.084 ************************************ 00:20:23.084 END TEST nvmf_tcp 00:20:23.084 ************************************ 00:20:23.084 19:10:50 -- common/autotest_common.sh@1142 -- 
# return 0 00:20:23.084 19:10:50 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:20:23.084 19:10:50 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:23.084 19:10:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:23.084 19:10:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.084 19:10:50 -- common/autotest_common.sh@10 -- # set +x 00:20:23.084 ************************************ 00:20:23.084 START TEST nvmf_dif 00:20:23.084 ************************************ 00:20:23.084 19:10:50 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:23.344 * Looking for test storage... 00:20:23.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:23.344 19:10:50 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.344 19:10:50 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.344 19:10:50 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.344 19:10:50 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.344 19:10:50 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.344 19:10:50 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.344 19:10:50 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.344 19:10:50 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:23.344 19:10:50 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.344 19:10:50 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:23.344 19:10:50 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:23.344 19:10:50 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:23.344 19:10:50 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:23.344 19:10:50 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.344 19:10:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:23.344 19:10:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:23.344 19:10:50 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:23.344 Cannot find device "nvmf_tgt_br" 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@155 -- # true 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.344 Cannot find device "nvmf_tgt_br2" 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@156 -- # true 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:23.344 Cannot find device "nvmf_tgt_br" 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@158 -- # true 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:23.344 Cannot find device "nvmf_tgt_br2" 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@159 -- # true 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:23.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:23.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:23.344 19:10:50 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:23.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:20:23.603 00:20:23.603 --- 10.0.0.2 ping statistics --- 00:20:23.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.603 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:23.603 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:23.603 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:20:23.603 00:20:23.603 --- 10.0.0.3 ping statistics --- 00:20:23.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.603 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:23.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:23.603 00:20:23.603 --- 10.0.0.1 ping statistics --- 00:20:23.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.603 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:23.603 19:10:50 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:23.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:23.862 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:23.862 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:23.862 19:10:51 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.862 19:10:51 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:23.862 19:10:51 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:23.862 19:10:51 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.862 19:10:51 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:23.862 19:10:51 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:23.862 19:10:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:23.862 19:10:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:23.862 19:10:51 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.862 19:10:51 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:23.862 19:10:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:24.121 19:10:51 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83089 00:20:24.121 19:10:51 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:24.121 19:10:51 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83089 00:20:24.121 19:10:51 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 83089 ']' 00:20:24.121 19:10:51 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.121 19:10:51 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.121 19:10:51 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.121 19:10:51 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.121 19:10:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:24.121 [2024-07-15 19:10:51.203444] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:20:24.121 [2024-07-15 19:10:51.203541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.121 [2024-07-15 19:10:51.338907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.382 [2024-07-15 19:10:51.448456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:24.382 [2024-07-15 19:10:51.448522] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.382 [2024-07-15 19:10:51.448535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.382 [2024-07-15 19:10:51.448544] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.382 [2024-07-15 19:10:51.448551] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.382 [2024-07-15 19:10:51.448581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.382 [2024-07-15 19:10:51.500785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:24.949 19:10:52 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.949 19:10:52 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:20:24.949 19:10:52 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.949 19:10:52 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:24.949 19:10:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:25.208 19:10:52 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.208 19:10:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:25.208 19:10:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:25.208 19:10:52 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.208 19:10:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:25.208 [2024-07-15 19:10:52.258217] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.208 19:10:52 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.208 19:10:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:25.208 19:10:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:25.208 19:10:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:25.208 19:10:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:25.208 ************************************ 00:20:25.208 START TEST fio_dif_1_default 00:20:25.208 ************************************ 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:25.208 bdev_null0 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:25.208 
19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:25.208 [2024-07-15 19:10:52.307125] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.208 { 00:20:25.208 "params": { 00:20:25.208 "name": "Nvme$subsystem", 00:20:25.208 "trtype": "$TEST_TRANSPORT", 00:20:25.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.208 "adrfam": "ipv4", 00:20:25.208 "trsvcid": "$NVMF_PORT", 00:20:25.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.208 "hdgst": ${hdgst:-false}, 00:20:25.208 "ddgst": ${ddgst:-false} 00:20:25.208 }, 00:20:25.208 "method": "bdev_nvme_attach_controller" 00:20:25.208 } 00:20:25.208 EOF 00:20:25.208 )") 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:25.208 19:10:52 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:25.209 "params": { 00:20:25.209 "name": "Nvme0", 00:20:25.209 "trtype": "tcp", 00:20:25.209 "traddr": "10.0.0.2", 00:20:25.209 "adrfam": "ipv4", 00:20:25.209 "trsvcid": "4420", 00:20:25.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:25.209 "hdgst": false, 00:20:25.209 "ddgst": false 00:20:25.209 }, 00:20:25.209 "method": "bdev_nvme_attach_controller" 00:20:25.209 }' 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:25.209 19:10:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.467 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:25.467 fio-3.35 00:20:25.467 Starting 1 thread 00:20:37.671 00:20:37.671 filename0: (groupid=0, jobs=1): err= 0: pid=83156: Mon Jul 15 19:11:03 2024 00:20:37.671 read: IOPS=8693, BW=34.0MiB/s (35.6MB/s)(340MiB/10001msec) 00:20:37.671 slat (usec): min=5, max=115, avg= 8.77, stdev= 3.11 00:20:37.671 clat (usec): min=346, max=2619, avg=434.45, stdev=30.67 00:20:37.671 lat (usec): min=353, max=2643, avg=443.22, stdev=31.30 00:20:37.671 clat percentiles (usec): 00:20:37.671 | 1.00th=[ 375], 5.00th=[ 396], 
10.00th=[ 404], 20.00th=[ 416], 00:20:37.671 | 30.00th=[ 420], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 437], 00:20:37.671 | 70.00th=[ 445], 80.00th=[ 453], 90.00th=[ 469], 95.00th=[ 482], 00:20:37.671 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 537], 99.95th=[ 570], 00:20:37.671 | 99.99th=[ 635] 00:20:37.671 bw ( KiB/s): min=33952, max=35680, per=99.95%, avg=34757.05, stdev=431.47, samples=19 00:20:37.671 iops : min= 8488, max= 8920, avg=8689.26, stdev=107.87, samples=19 00:20:37.671 lat (usec) : 500=98.49%, 750=1.51% 00:20:37.671 lat (msec) : 2=0.01%, 4=0.01% 00:20:37.671 cpu : usr=84.87%, sys=13.33%, ctx=18, majf=0, minf=0 00:20:37.671 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:37.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.671 issued rwts: total=86944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.671 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:37.671 00:20:37.671 Run status group 0 (all jobs): 00:20:37.671 READ: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=340MiB (356MB), run=10001-10001msec 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:37.671 ************************************ 00:20:37.671 END TEST fio_dif_1_default 00:20:37.671 ************************************ 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.671 00:20:37.671 real 0m11.015s 00:20:37.671 user 0m9.150s 00:20:37.671 sys 0m1.593s 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:37.671 19:11:03 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:37.671 19:11:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:37.671 19:11:03 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:37.671 19:11:03 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.671 19:11:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:37.671 ************************************ 00:20:37.671 START TEST fio_dif_1_multi_subsystems 00:20:37.671 ************************************ 00:20:37.671 19:11:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:37.671 bdev_null0 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:37.671 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:37.672 [2024-07-15 19:11:03.369381] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:37.672 bdev_null1 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.672 { 00:20:37.672 "params": { 00:20:37.672 "name": "Nvme$subsystem", 00:20:37.672 "trtype": "$TEST_TRANSPORT", 00:20:37.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.672 "adrfam": "ipv4", 00:20:37.672 "trsvcid": "$NVMF_PORT", 00:20:37.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.672 "hdgst": ${hdgst:-false}, 00:20:37.672 "ddgst": ${ddgst:-false} 00:20:37.672 }, 00:20:37.672 "method": "bdev_nvme_attach_controller" 00:20:37.672 } 00:20:37.672 EOF 00:20:37.672 )") 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:37.672 19:11:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.672 { 00:20:37.672 "params": { 00:20:37.672 "name": "Nvme$subsystem", 00:20:37.672 "trtype": "$TEST_TRANSPORT", 00:20:37.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.672 "adrfam": "ipv4", 00:20:37.672 "trsvcid": "$NVMF_PORT", 00:20:37.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.672 "hdgst": ${hdgst:-false}, 00:20:37.672 "ddgst": ${ddgst:-false} 00:20:37.672 }, 00:20:37.672 "method": "bdev_nvme_attach_controller" 00:20:37.672 } 00:20:37.672 EOF 00:20:37.672 )") 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
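For reference, the per-subsystem setup that the fio_dif_1_multi_subsystems trace above drives can be reduced to the sketch below. rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py talking to the already-running nvmf_tgt, and the tcp transport was created earlier in this log with nvmf_create_transport -t tcp -o --dif-insert-or-strip; the $N parameterization simply consolidates the N=0 and N=1 cases shown in the trace.

  # repeated for N=0 and N=1, using the same values the trace shows
  rpc_cmd bdev_null_create bdev_null$N 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$N --serial-number 53313233-$N --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$N bdev_null$N
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$N -t tcp -a 10.0.0.2 -s 4420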
00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:37.672 "params": { 00:20:37.672 "name": "Nvme0", 00:20:37.672 "trtype": "tcp", 00:20:37.672 "traddr": "10.0.0.2", 00:20:37.672 "adrfam": "ipv4", 00:20:37.672 "trsvcid": "4420", 00:20:37.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.672 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:37.672 "hdgst": false, 00:20:37.672 "ddgst": false 00:20:37.672 }, 00:20:37.672 "method": "bdev_nvme_attach_controller" 00:20:37.672 },{ 00:20:37.672 "params": { 00:20:37.672 "name": "Nvme1", 00:20:37.672 "trtype": "tcp", 00:20:37.672 "traddr": "10.0.0.2", 00:20:37.672 "adrfam": "ipv4", 00:20:37.672 "trsvcid": "4420", 00:20:37.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.672 "hdgst": false, 00:20:37.672 "ddgst": false 00:20:37.672 }, 00:20:37.672 "method": "bdev_nvme_attach_controller" 00:20:37.672 }' 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.672 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:37.673 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:37.673 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:37.673 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:37.673 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:37.673 19:11:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.673 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:37.673 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:37.673 fio-3.35 00:20:37.673 Starting 2 threads 00:20:47.642 00:20:47.642 filename0: (groupid=0, jobs=1): err= 0: pid=83315: Mon Jul 15 19:11:14 2024 00:20:47.642 read: IOPS=4834, BW=18.9MiB/s (19.8MB/s)(189MiB/10001msec) 00:20:47.642 slat (usec): min=7, max=105, avg=13.51, stdev= 3.33 00:20:47.642 clat (usec): min=661, max=1446, avg=790.88, stdev=38.10 00:20:47.642 lat (usec): min=669, max=1480, avg=804.39, stdev=39.24 00:20:47.642 clat percentiles (usec): 00:20:47.642 | 1.00th=[ 701], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 766], 00:20:47.642 | 30.00th=[ 775], 40.00th=[ 783], 50.00th=[ 791], 60.00th=[ 799], 00:20:47.642 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 832], 95.00th=[ 848], 00:20:47.642 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 922], 99.95th=[ 938], 00:20:47.642 | 99.99th=[ 1020] 00:20:47.642 bw ( KiB/s): min=18912, max=19488, per=50.03%, avg=19349.89, stdev=151.62, samples=19 00:20:47.642 iops : min= 4728, max= 4872, 
avg=4837.47, stdev=37.91, samples=19 00:20:47.642 lat (usec) : 750=13.49%, 1000=86.49% 00:20:47.642 lat (msec) : 2=0.01% 00:20:47.642 cpu : usr=89.84%, sys=8.72%, ctx=118, majf=0, minf=9 00:20:47.642 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.642 issued rwts: total=48348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.642 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:47.642 filename1: (groupid=0, jobs=1): err= 0: pid=83316: Mon Jul 15 19:11:14 2024 00:20:47.642 read: IOPS=4834, BW=18.9MiB/s (19.8MB/s)(189MiB/10001msec) 00:20:47.642 slat (nsec): min=7252, max=55727, avg=13587.47, stdev=3237.12 00:20:47.642 clat (usec): min=444, max=1443, avg=790.08, stdev=26.75 00:20:47.642 lat (usec): min=454, max=1469, avg=803.67, stdev=27.10 00:20:47.642 clat percentiles (usec): 00:20:47.642 | 1.00th=[ 734], 5.00th=[ 750], 10.00th=[ 758], 20.00th=[ 766], 00:20:47.642 | 30.00th=[ 775], 40.00th=[ 783], 50.00th=[ 791], 60.00th=[ 799], 00:20:47.642 | 70.00th=[ 799], 80.00th=[ 807], 90.00th=[ 824], 95.00th=[ 832], 00:20:47.642 | 99.00th=[ 857], 99.50th=[ 873], 99.90th=[ 898], 99.95th=[ 914], 00:20:47.642 | 99.99th=[ 971] 00:20:47.642 bw ( KiB/s): min=18944, max=19488, per=50.03%, avg=19351.16, stdev=147.29, samples=19 00:20:47.642 iops : min= 4736, max= 4872, avg=4837.79, stdev=36.73, samples=19 00:20:47.642 lat (usec) : 500=0.01%, 750=5.05%, 1000=94.94% 00:20:47.642 lat (msec) : 2=0.01% 00:20:47.642 cpu : usr=89.91%, sys=8.71%, ctx=9, majf=0, minf=0 00:20:47.642 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.642 issued rwts: total=48352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.642 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:47.642 00:20:47.642 Run status group 0 (all jobs): 00:20:47.642 READ: bw=37.8MiB/s (39.6MB/s), 18.9MiB/s-18.9MiB/s (19.8MB/s-19.8MB/s), io=378MiB (396MB), run=10001-10001msec 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # 
set +x 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.642 ************************************ 00:20:47.642 END TEST fio_dif_1_multi_subsystems 00:20:47.642 ************************************ 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.642 00:20:47.642 real 0m11.140s 00:20:47.642 user 0m18.727s 00:20:47.642 sys 0m2.051s 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:47.642 19:11:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:47.642 19:11:14 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:47.642 19:11:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:47.642 19:11:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:47.642 19:11:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:47.642 19:11:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:47.642 ************************************ 00:20:47.642 START TEST fio_dif_rand_params 00:20:47.642 ************************************ 00:20:47.642 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:20:47.642 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:47.642 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:47.642 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:47.642 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:47.642 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:47.642 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:47.642 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:47.642 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:47.642 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:47.642 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:47.642 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # 
local sub_id=0 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.643 bdev_null0 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.643 [2024-07-15 19:11:14.557965] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.643 { 00:20:47.643 "params": { 00:20:47.643 "name": "Nvme$subsystem", 00:20:47.643 "trtype": "$TEST_TRANSPORT", 00:20:47.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.643 "adrfam": "ipv4", 00:20:47.643 "trsvcid": "$NVMF_PORT", 00:20:47.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.643 "hdgst": ${hdgst:-false}, 00:20:47.643 "ddgst": ${ddgst:-false} 00:20:47.643 }, 00:20:47.643 "method": "bdev_nvme_attach_controller" 00:20:47.643 } 00:20:47.643 EOF 00:20:47.643 )") 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
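The fio invocation that follows is the same pattern every test in this file uses: fio is pointed at the SPDK bdev engine via LD_PRELOAD and handed the generated configuration over file descriptors. A minimal sketch, using the paths exactly as they appear in the trace; --spdk_json_conf reads the bdev_nvme_attach_controller JSON printed just below, and the positional /dev/fd/61 is taken here to be the fio job file produced by gen_fio_conf (fio treats positional arguments as job files).

  # run fio against the SPDK bdev fio plugin; job parameters for this run come
  # from target/dif.sh@103 above (bs=128k, numjobs=3, iodepth=3, runtime=5)
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61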
00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:47.643 "params": { 00:20:47.643 "name": "Nvme0", 00:20:47.643 "trtype": "tcp", 00:20:47.643 "traddr": "10.0.0.2", 00:20:47.643 "adrfam": "ipv4", 00:20:47.643 "trsvcid": "4420", 00:20:47.643 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:47.643 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:47.643 "hdgst": false, 00:20:47.643 "ddgst": false 00:20:47.643 }, 00:20:47.643 "method": "bdev_nvme_attach_controller" 00:20:47.643 }' 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:47.643 19:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.643 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:47.643 ... 
00:20:47.643 fio-3.35 00:20:47.643 Starting 3 threads 00:20:54.273 00:20:54.273 filename0: (groupid=0, jobs=1): err= 0: pid=83472: Mon Jul 15 19:11:20 2024 00:20:54.273 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5012msec) 00:20:54.273 slat (nsec): min=6756, max=43380, avg=14837.29, stdev=4626.36 00:20:54.273 clat (usec): min=10134, max=14483, avg=11488.38, stdev=203.77 00:20:54.273 lat (usec): min=10143, max=14525, avg=11503.22, stdev=204.23 00:20:54.273 clat percentiles (usec): 00:20:54.273 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:20:54.273 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:54.273 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11600], 95.00th=[11731], 00:20:54.273 | 99.00th=[11863], 99.50th=[11994], 99.90th=[14484], 99.95th=[14484], 00:20:54.273 | 99.99th=[14484] 00:20:54.273 bw ( KiB/s): min=33024, max=33792, per=33.34%, avg=33331.20, stdev=396.59, samples=10 00:20:54.273 iops : min= 258, max= 264, avg=260.40, stdev= 3.10, samples=10 00:20:54.273 lat (msec) : 20=100.00% 00:20:54.273 cpu : usr=91.30%, sys=8.12%, ctx=10, majf=0, minf=9 00:20:54.273 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.273 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.273 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:54.273 filename0: (groupid=0, jobs=1): err= 0: pid=83473: Mon Jul 15 19:11:20 2024 00:20:54.273 read: IOPS=260, BW=32.6MiB/s (34.1MB/s)(163MiB/5011msec) 00:20:54.273 slat (nsec): min=7712, max=40751, avg=15165.74, stdev=4207.82 00:20:54.273 clat (usec): min=11233, max=12942, avg=11483.35, stdev=129.87 00:20:54.273 lat (usec): min=11241, max=12965, avg=11498.51, stdev=130.26 00:20:54.273 clat percentiles (usec): 00:20:54.273 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:20:54.273 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:54.273 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11600], 95.00th=[11731], 00:20:54.273 | 99.00th=[11863], 99.50th=[11863], 99.90th=[12911], 99.95th=[12911], 00:20:54.273 | 99.99th=[12911] 00:20:54.273 bw ( KiB/s): min=33024, max=33792, per=33.34%, avg=33331.20, stdev=396.59, samples=10 00:20:54.273 iops : min= 258, max= 264, avg=260.40, stdev= 3.10, samples=10 00:20:54.273 lat (msec) : 20=100.00% 00:20:54.273 cpu : usr=91.28%, sys=7.84%, ctx=90, majf=0, minf=9 00:20:54.273 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.273 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.273 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:54.273 filename0: (groupid=0, jobs=1): err= 0: pid=83474: Mon Jul 15 19:11:20 2024 00:20:54.273 read: IOPS=260, BW=32.6MiB/s (34.1MB/s)(163MiB/5009msec) 00:20:54.273 slat (nsec): min=7766, max=39776, avg=15679.93, stdev=4066.92 00:20:54.273 clat (usec): min=10110, max=14681, avg=11478.92, stdev=206.70 00:20:54.273 lat (usec): min=10118, max=14706, avg=11494.60, stdev=206.99 00:20:54.273 clat percentiles (usec): 00:20:54.273 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:20:54.273 | 30.00th=[11469], 40.00th=[11469], 
50.00th=[11469], 60.00th=[11469], 00:20:54.273 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11731], 00:20:54.273 | 99.00th=[11863], 99.50th=[11863], 99.90th=[14615], 99.95th=[14746], 00:20:54.273 | 99.99th=[14746] 00:20:54.273 bw ( KiB/s): min=33024, max=33792, per=33.34%, avg=33337.80, stdev=391.43, samples=10 00:20:54.273 iops : min= 258, max= 264, avg=260.40, stdev= 3.10, samples=10 00:20:54.273 lat (msec) : 20=100.00% 00:20:54.273 cpu : usr=91.33%, sys=8.15%, ctx=8, majf=0, minf=9 00:20:54.273 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.273 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.273 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:54.273 00:20:54.273 Run status group 0 (all jobs): 00:20:54.273 READ: bw=97.6MiB/s (102MB/s), 32.5MiB/s-32.6MiB/s (34.1MB/s-34.1MB/s), io=489MiB (513MB), run=5009-5012msec 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:54.273 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:54.274 19:11:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.274 bdev_null0 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.274 [2024-07-15 19:11:20.573419] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.274 bdev_null1 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.274 bdev_null2 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.274 { 00:20:54.274 "params": { 00:20:54.274 "name": "Nvme$subsystem", 00:20:54.274 "trtype": "$TEST_TRANSPORT", 00:20:54.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.274 "adrfam": "ipv4", 00:20:54.274 "trsvcid": "$NVMF_PORT", 00:20:54.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.274 "hdgst": ${hdgst:-false}, 00:20:54.274 "ddgst": ${ddgst:-false} 00:20:54.274 }, 00:20:54.274 "method": "bdev_nvme_attach_controller" 00:20:54.274 } 00:20:54.274 EOF 00:20:54.274 )") 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.274 { 00:20:54.274 "params": { 00:20:54.274 "name": "Nvme$subsystem", 00:20:54.274 "trtype": "$TEST_TRANSPORT", 00:20:54.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.274 "adrfam": "ipv4", 00:20:54.274 "trsvcid": "$NVMF_PORT", 00:20:54.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.274 "hdgst": ${hdgst:-false}, 00:20:54.274 "ddgst": ${ddgst:-false} 00:20:54.274 }, 00:20:54.274 "method": 
"bdev_nvme_attach_controller" 00:20:54.274 } 00:20:54.274 EOF 00:20:54.274 )") 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:54.274 { 00:20:54.274 "params": { 00:20:54.274 "name": "Nvme$subsystem", 00:20:54.274 "trtype": "$TEST_TRANSPORT", 00:20:54.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.274 "adrfam": "ipv4", 00:20:54.274 "trsvcid": "$NVMF_PORT", 00:20:54.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.274 "hdgst": ${hdgst:-false}, 00:20:54.274 "ddgst": ${ddgst:-false} 00:20:54.274 }, 00:20:54.274 "method": "bdev_nvme_attach_controller" 00:20:54.274 } 00:20:54.274 EOF 00:20:54.274 )") 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:54.274 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:54.275 19:11:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:54.275 "params": { 00:20:54.275 "name": "Nvme0", 00:20:54.275 "trtype": "tcp", 00:20:54.275 "traddr": "10.0.0.2", 00:20:54.275 "adrfam": "ipv4", 00:20:54.275 "trsvcid": "4420", 00:20:54.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:54.275 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:54.275 "hdgst": false, 00:20:54.275 "ddgst": false 00:20:54.275 }, 00:20:54.275 "method": "bdev_nvme_attach_controller" 00:20:54.275 },{ 00:20:54.275 "params": { 00:20:54.275 "name": "Nvme1", 00:20:54.275 "trtype": "tcp", 00:20:54.275 "traddr": "10.0.0.2", 00:20:54.275 "adrfam": "ipv4", 00:20:54.275 "trsvcid": "4420", 00:20:54.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.275 "hdgst": false, 00:20:54.275 "ddgst": false 00:20:54.275 }, 00:20:54.275 "method": "bdev_nvme_attach_controller" 00:20:54.275 },{ 00:20:54.275 "params": { 00:20:54.275 "name": "Nvme2", 00:20:54.275 "trtype": "tcp", 00:20:54.275 "traddr": "10.0.0.2", 00:20:54.275 "adrfam": "ipv4", 00:20:54.275 "trsvcid": "4420", 00:20:54.275 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:54.275 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:54.275 "hdgst": false, 00:20:54.275 "ddgst": false 00:20:54.275 }, 00:20:54.275 "method": "bdev_nvme_attach_controller" 00:20:54.275 }' 00:20:54.275 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:54.275 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:54.275 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.275 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.275 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:54.275 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:54.275 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:54.275 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:54.275 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:54.275 19:11:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:54.275 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:54.275 ... 00:20:54.275 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:54.275 ... 00:20:54.275 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:54.275 ... 00:20:54.275 fio-3.35 00:20:54.275 Starting 24 threads 00:21:06.497 00:21:06.497 filename0: (groupid=0, jobs=1): err= 0: pid=83571: Mon Jul 15 19:11:31 2024 00:21:06.497 read: IOPS=223, BW=892KiB/s (914kB/s)(8932KiB/10011msec) 00:21:06.497 slat (usec): min=4, max=4025, avg=18.44, stdev=120.11 00:21:06.497 clat (msec): min=10, max=141, avg=71.62, stdev=19.36 00:21:06.497 lat (msec): min=10, max=142, avg=71.64, stdev=19.36 00:21:06.497 clat percentiles (msec): 00:21:06.497 | 1.00th=[ 34], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:21:06.497 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:21:06.497 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 110], 00:21:06.497 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 140], 99.95th=[ 140], 00:21:06.497 | 99.99th=[ 142] 00:21:06.497 bw ( KiB/s): min= 640, max= 1048, per=4.05%, avg=878.32, stdev=125.21, samples=19 00:21:06.497 iops : min= 160, max= 262, avg=219.58, stdev=31.30, samples=19 00:21:06.497 lat (msec) : 20=0.54%, 50=15.54%, 100=74.47%, 250=9.45% 00:21:06.497 cpu : usr=37.30%, sys=2.15%, ctx=1094, majf=0, minf=9 00:21:06.497 IO depths : 1=0.1%, 2=1.1%, 4=4.6%, 8=78.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:06.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.497 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.497 issued rwts: total=2233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.497 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.497 filename0: (groupid=0, jobs=1): err= 0: pid=83572: Mon Jul 15 19:11:31 2024 00:21:06.497 read: IOPS=236, BW=945KiB/s (967kB/s)(9448KiB/10002msec) 00:21:06.497 slat (usec): min=4, max=8031, avg=25.70, stdev=260.88 00:21:06.497 clat (msec): min=2, max=156, avg=67.62, stdev=22.28 00:21:06.497 lat (msec): min=2, max=156, avg=67.65, stdev=22.28 00:21:06.497 clat percentiles (msec): 00:21:06.497 | 1.00th=[ 4], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 48], 00:21:06.497 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:21:06.497 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:21:06.497 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 126], 99.95th=[ 157], 00:21:06.497 | 99.99th=[ 157] 00:21:06.497 bw ( KiB/s): min= 664, max= 1048, per=4.14%, avg=897.21, stdev=114.56, samples=19 00:21:06.497 iops : min= 166, max= 262, avg=224.26, stdev=28.64, samples=19 00:21:06.497 lat (msec) : 4=1.61%, 10=1.61%, 20=0.72%, 50=19.90%, 100=68.71% 00:21:06.497 lat (msec) : 250=7.45% 00:21:06.497 cpu : usr=33.82%, sys=2.05%, ctx=953, majf=0, minf=9 00:21:06.497 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:06.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.497 complete : 0=0.0%, 4=88.0%, 8=11.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.497 issued rwts: total=2362,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:06.497 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.497 filename0: (groupid=0, jobs=1): err= 0: pid=83573: Mon Jul 15 19:11:31 2024 00:21:06.497 read: IOPS=237, BW=951KiB/s (974kB/s)(9516KiB/10004msec) 00:21:06.497 slat (usec): min=3, max=8046, avg=29.55, stdev=294.61 00:21:06.497 clat (msec): min=3, max=156, avg=67.15, stdev=21.07 00:21:06.497 lat (msec): min=3, max=156, avg=67.18, stdev=21.07 00:21:06.497 clat percentiles (msec): 00:21:06.497 | 1.00th=[ 7], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 50], 00:21:06.497 | 30.00th=[ 55], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 72], 00:21:06.497 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 109], 00:21:06.497 | 99.00th=[ 125], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 157], 00:21:06.497 | 99.99th=[ 157] 00:21:06.497 bw ( KiB/s): min= 664, max= 1080, per=4.24%, avg=918.42, stdev=119.40, samples=19 00:21:06.497 iops : min= 166, max= 270, avg=229.58, stdev=29.83, samples=19 00:21:06.497 lat (msec) : 4=0.13%, 10=1.60%, 20=0.80%, 50=20.05%, 100=70.16% 00:21:06.497 lat (msec) : 250=7.27% 00:21:06.497 cpu : usr=39.71%, sys=2.59%, ctx=1624, majf=0, minf=9 00:21:06.497 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:06.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.497 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.497 issued rwts: total=2379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.497 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.498 filename0: (groupid=0, jobs=1): err= 0: pid=83574: Mon Jul 15 19:11:31 2024 00:21:06.498 read: IOPS=224, BW=898KiB/s (920kB/s)(9008KiB/10027msec) 00:21:06.498 slat (usec): min=8, max=5023, avg=27.04, stdev=216.60 00:21:06.498 clat (msec): min=34, max=124, avg=71.05, stdev=18.65 00:21:06.498 lat (msec): min=34, max=125, avg=71.08, stdev=18.65 00:21:06.498 clat percentiles (msec): 00:21:06.498 | 1.00th=[ 41], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:21:06.498 | 30.00th=[ 60], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 74], 00:21:06.498 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 109], 00:21:06.498 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 126], 99.95th=[ 126], 00:21:06.498 | 99.99th=[ 126] 00:21:06.498 bw ( KiB/s): min= 648, max= 1010, per=4.13%, avg=894.40, stdev=111.52, samples=20 00:21:06.498 iops : min= 162, max= 252, avg=223.55, stdev=27.83, samples=20 00:21:06.498 lat (msec) : 50=16.25%, 100=75.04%, 250=8.70% 00:21:06.498 cpu : usr=42.66%, sys=2.80%, ctx=1357, majf=0, minf=9 00:21:06.498 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:06.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 issued rwts: total=2252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.498 filename0: (groupid=0, jobs=1): err= 0: pid=83575: Mon Jul 15 19:11:31 2024 00:21:06.498 read: IOPS=230, BW=922KiB/s (944kB/s)(9252KiB/10039msec) 00:21:06.498 slat (usec): min=4, max=8051, avg=35.99, stdev=401.31 00:21:06.498 clat (usec): min=1572, max=153869, avg=69236.77, stdev=22324.15 00:21:06.498 lat (usec): min=1582, max=153883, avg=69272.76, stdev=22328.90 00:21:06.498 clat percentiles (msec): 00:21:06.498 | 1.00th=[ 4], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 52], 00:21:06.498 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 
72], 60.00th=[ 72], 00:21:06.498 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 110], 00:21:06.498 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 146], 00:21:06.498 | 99.99th=[ 155] 00:21:06.498 bw ( KiB/s): min= 608, max= 1685, per=4.24%, avg=918.15, stdev=208.96, samples=20 00:21:06.498 iops : min= 152, max= 421, avg=229.50, stdev=52.20, samples=20 00:21:06.498 lat (msec) : 2=0.69%, 4=0.69%, 10=1.38%, 20=1.30%, 50=14.74% 00:21:06.498 lat (msec) : 100=72.55%, 250=8.65% 00:21:06.498 cpu : usr=39.52%, sys=2.11%, ctx=1165, majf=0, minf=0 00:21:06.498 IO depths : 1=0.2%, 2=0.7%, 4=2.2%, 8=80.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:06.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 issued rwts: total=2313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.498 filename0: (groupid=0, jobs=1): err= 0: pid=83576: Mon Jul 15 19:11:31 2024 00:21:06.498 read: IOPS=220, BW=883KiB/s (904kB/s)(8860KiB/10032msec) 00:21:06.498 slat (usec): min=3, max=8036, avg=24.99, stdev=294.83 00:21:06.498 clat (msec): min=15, max=143, avg=72.29, stdev=20.22 00:21:06.498 lat (msec): min=15, max=143, avg=72.31, stdev=20.22 00:21:06.498 clat percentiles (msec): 00:21:06.498 | 1.00th=[ 18], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:21:06.498 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:21:06.498 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 106], 95.00th=[ 109], 00:21:06.498 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:21:06.498 | 99.99th=[ 144] 00:21:06.498 bw ( KiB/s): min= 560, max= 1150, per=4.07%, avg=881.05, stdev=134.88, samples=20 00:21:06.498 iops : min= 140, max= 287, avg=220.20, stdev=33.65, samples=20 00:21:06.498 lat (msec) : 20=1.35%, 50=14.99%, 100=72.78%, 250=10.88% 00:21:06.498 cpu : usr=31.33%, sys=1.85%, ctx=855, majf=0, minf=9 00:21:06.498 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.6%, 16=16.8%, 32=0.0%, >=64=0.0% 00:21:06.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 issued rwts: total=2215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.498 filename0: (groupid=0, jobs=1): err= 0: pid=83577: Mon Jul 15 19:11:31 2024 00:21:06.498 read: IOPS=231, BW=925KiB/s (947kB/s)(9256KiB/10007msec) 00:21:06.498 slat (usec): min=4, max=8026, avg=27.27, stdev=257.85 00:21:06.498 clat (msec): min=9, max=127, avg=69.05, stdev=20.27 00:21:06.498 lat (msec): min=9, max=128, avg=69.07, stdev=20.27 00:21:06.498 clat percentiles (msec): 00:21:06.498 | 1.00th=[ 17], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 50], 00:21:06.498 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:21:06.498 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 99], 95.00th=[ 112], 00:21:06.498 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 129], 99.95th=[ 129], 00:21:06.498 | 99.99th=[ 129] 00:21:06.498 bw ( KiB/s): min= 664, max= 1024, per=4.16%, avg=902.21, stdev=105.57, samples=19 00:21:06.498 iops : min= 166, max= 256, avg=225.53, stdev=26.37, samples=19 00:21:06.498 lat (msec) : 10=0.17%, 20=1.08%, 50=19.49%, 100=69.97%, 250=9.29% 00:21:06.498 cpu : usr=41.43%, sys=2.64%, ctx=1281, majf=0, minf=9 00:21:06.498 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:06.498 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 issued rwts: total=2314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.498 filename0: (groupid=0, jobs=1): err= 0: pid=83578: Mon Jul 15 19:11:31 2024 00:21:06.498 read: IOPS=233, BW=933KiB/s (955kB/s)(9340KiB/10012msec) 00:21:06.498 slat (usec): min=4, max=8039, avg=33.26, stdev=351.65 00:21:06.498 clat (msec): min=10, max=131, avg=68.42, stdev=19.41 00:21:06.498 lat (msec): min=10, max=131, avg=68.46, stdev=19.41 00:21:06.498 clat percentiles (msec): 00:21:06.498 | 1.00th=[ 22], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 49], 00:21:06.498 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:21:06.498 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 96], 95.00th=[ 108], 00:21:06.498 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 128], 99.95th=[ 128], 00:21:06.498 | 99.99th=[ 131] 00:21:06.498 bw ( KiB/s): min= 664, max= 1048, per=4.22%, avg=915.79, stdev=110.80, samples=19 00:21:06.498 iops : min= 166, max= 262, avg=228.95, stdev=27.70, samples=19 00:21:06.498 lat (msec) : 20=0.99%, 50=20.86%, 100=70.28%, 250=7.88% 00:21:06.498 cpu : usr=38.20%, sys=1.99%, ctx=1094, majf=0, minf=9 00:21:06.498 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:06.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 issued rwts: total=2335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.498 filename1: (groupid=0, jobs=1): err= 0: pid=83579: Mon Jul 15 19:11:31 2024 00:21:06.498 read: IOPS=233, BW=933KiB/s (956kB/s)(9336KiB/10005msec) 00:21:06.498 slat (usec): min=3, max=8029, avg=24.26, stdev=248.80 00:21:06.498 clat (msec): min=6, max=129, avg=68.46, stdev=20.87 00:21:06.498 lat (msec): min=6, max=129, avg=68.48, stdev=20.87 00:21:06.498 clat percentiles (msec): 00:21:06.498 | 1.00th=[ 14], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 48], 00:21:06.498 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:21:06.498 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:21:06.498 | 99.00th=[ 123], 99.50th=[ 128], 99.90th=[ 128], 99.95th=[ 130], 00:21:06.498 | 99.99th=[ 130] 00:21:06.498 bw ( KiB/s): min= 664, max= 1048, per=4.19%, avg=907.79, stdev=116.23, samples=19 00:21:06.498 iops : min= 166, max= 262, avg=226.89, stdev=29.05, samples=19 00:21:06.498 lat (msec) : 10=0.86%, 20=0.94%, 50=24.29%, 100=65.30%, 250=8.61% 00:21:06.498 cpu : usr=32.33%, sys=1.85%, ctx=933, majf=0, minf=9 00:21:06.498 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:06.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.498 filename1: (groupid=0, jobs=1): err= 0: pid=83580: Mon Jul 15 19:11:31 2024 00:21:06.498 read: IOPS=207, BW=831KiB/s (851kB/s)(8332KiB/10029msec) 00:21:06.498 slat (usec): min=8, max=5050, avg=20.84, stdev=175.13 00:21:06.498 clat (msec): min=32, max=151, avg=76.88, stdev=20.43 00:21:06.498 lat (msec): min=32, max=152, avg=76.90, stdev=20.42 
00:21:06.498 clat percentiles (msec): 00:21:06.498 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 62], 00:21:06.498 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 78], 00:21:06.498 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 113], 00:21:06.498 | 99.00th=[ 140], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 153], 00:21:06.498 | 99.99th=[ 153] 00:21:06.498 bw ( KiB/s): min= 528, max= 1024, per=3.81%, avg=826.70, stdev=141.63, samples=20 00:21:06.498 iops : min= 132, max= 256, avg=206.65, stdev=35.38, samples=20 00:21:06.498 lat (msec) : 50=9.51%, 100=76.52%, 250=13.97% 00:21:06.498 cpu : usr=42.56%, sys=2.56%, ctx=1316, majf=0, minf=10 00:21:06.498 IO depths : 1=0.1%, 2=3.0%, 4=12.3%, 8=70.0%, 16=14.5%, 32=0.0%, >=64=0.0% 00:21:06.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 complete : 0=0.0%, 4=90.7%, 8=6.5%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 issued rwts: total=2083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.498 filename1: (groupid=0, jobs=1): err= 0: pid=83581: Mon Jul 15 19:11:31 2024 00:21:06.498 read: IOPS=226, BW=906KiB/s (928kB/s)(9076KiB/10020msec) 00:21:06.498 slat (usec): min=4, max=8026, avg=23.19, stdev=211.07 00:21:06.498 clat (msec): min=27, max=123, avg=70.50, stdev=18.50 00:21:06.498 lat (msec): min=27, max=123, avg=70.53, stdev=18.49 00:21:06.498 clat percentiles (msec): 00:21:06.498 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 53], 00:21:06.498 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:21:06.498 | 70.00th=[ 78], 80.00th=[ 83], 90.00th=[ 97], 95.00th=[ 109], 00:21:06.498 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:21:06.498 | 99.99th=[ 124] 00:21:06.498 bw ( KiB/s): min= 664, max= 1056, per=4.17%, avg=903.65, stdev=105.06, samples=20 00:21:06.498 iops : min= 166, max= 264, avg=225.85, stdev=26.24, samples=20 00:21:06.498 lat (msec) : 50=17.19%, 100=74.48%, 250=8.33% 00:21:06.498 cpu : usr=39.63%, sys=2.48%, ctx=1266, majf=0, minf=9 00:21:06.498 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:06.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.498 issued rwts: total=2269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.499 filename1: (groupid=0, jobs=1): err= 0: pid=83582: Mon Jul 15 19:11:31 2024 00:21:06.499 read: IOPS=236, BW=946KiB/s (969kB/s)(9468KiB/10004msec) 00:21:06.499 slat (usec): min=5, max=8032, avg=24.11, stdev=247.22 00:21:06.499 clat (msec): min=5, max=157, avg=67.50, stdev=21.14 00:21:06.499 lat (msec): min=5, max=157, avg=67.52, stdev=21.13 00:21:06.499 clat percentiles (msec): 00:21:06.499 | 1.00th=[ 9], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 48], 00:21:06.499 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:21:06.499 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 108], 00:21:06.499 | 99.00th=[ 126], 99.50th=[ 127], 99.90th=[ 128], 99.95th=[ 157], 00:21:06.499 | 99.99th=[ 157] 00:21:06.499 bw ( KiB/s): min= 664, max= 1048, per=4.22%, avg=915.63, stdev=118.10, samples=19 00:21:06.499 iops : min= 166, max= 262, avg=228.89, stdev=29.51, samples=19 00:21:06.499 lat (msec) : 10=1.65%, 20=0.93%, 50=21.84%, 100=68.02%, 250=7.56% 00:21:06.499 cpu : usr=33.64%, sys=1.99%, ctx=972, majf=0, minf=9 
00:21:06.499 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:06.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 issued rwts: total=2367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.499 filename1: (groupid=0, jobs=1): err= 0: pid=83583: Mon Jul 15 19:11:31 2024 00:21:06.499 read: IOPS=218, BW=875KiB/s (896kB/s)(8772KiB/10025msec) 00:21:06.499 slat (usec): min=3, max=8028, avg=21.52, stdev=241.99 00:21:06.499 clat (msec): min=14, max=134, avg=73.00, stdev=19.59 00:21:06.499 lat (msec): min=14, max=134, avg=73.03, stdev=19.59 00:21:06.499 clat percentiles (msec): 00:21:06.499 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:21:06.499 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 72], 00:21:06.499 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 105], 95.00th=[ 109], 00:21:06.499 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 130], 99.95th=[ 134], 00:21:06.499 | 99.99th=[ 136] 00:21:06.499 bw ( KiB/s): min= 616, max= 1136, per=4.02%, avg=870.70, stdev=126.84, samples=20 00:21:06.499 iops : min= 154, max= 284, avg=217.65, stdev=31.69, samples=20 00:21:06.499 lat (msec) : 20=0.73%, 50=15.82%, 100=73.10%, 250=10.35% 00:21:06.499 cpu : usr=31.39%, sys=1.76%, ctx=844, majf=0, minf=9 00:21:06.499 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=78.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:06.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 issued rwts: total=2193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.499 filename1: (groupid=0, jobs=1): err= 0: pid=83584: Mon Jul 15 19:11:31 2024 00:21:06.499 read: IOPS=220, BW=880KiB/s (901kB/s)(8816KiB/10014msec) 00:21:06.499 slat (usec): min=5, max=8028, avg=37.15, stdev=327.21 00:21:06.499 clat (msec): min=36, max=142, avg=72.44, stdev=19.05 00:21:06.499 lat (msec): min=36, max=142, avg=72.48, stdev=19.04 00:21:06.499 clat percentiles (msec): 00:21:06.499 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:21:06.499 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:21:06.499 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 112], 00:21:06.499 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 129], 99.95th=[ 138], 00:21:06.499 | 99.99th=[ 144] 00:21:06.499 bw ( KiB/s): min= 640, max= 1048, per=4.05%, avg=877.90, stdev=111.29, samples=20 00:21:06.499 iops : min= 160, max= 262, avg=219.45, stdev=27.81, samples=20 00:21:06.499 lat (msec) : 50=15.11%, 100=74.09%, 250=10.80% 00:21:06.499 cpu : usr=40.77%, sys=2.70%, ctx=1477, majf=0, minf=9 00:21:06.499 IO depths : 1=0.1%, 2=1.5%, 4=6.1%, 8=77.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:06.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 complete : 0=0.0%, 4=88.8%, 8=9.9%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.499 filename1: (groupid=0, jobs=1): err= 0: pid=83585: Mon Jul 15 19:11:31 2024 00:21:06.499 read: IOPS=236, BW=946KiB/s (969kB/s)(9464KiB/10001msec) 00:21:06.499 slat (usec): min=4, max=8025, avg=23.40, stdev=236.32 00:21:06.499 clat (msec): min=2, max=143, avg=67.52, 
stdev=23.00 00:21:06.499 lat (msec): min=2, max=143, avg=67.55, stdev=22.99 00:21:06.499 clat percentiles (msec): 00:21:06.499 | 1.00th=[ 4], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 48], 00:21:06.499 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:21:06.499 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 108], 00:21:06.499 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 144], 00:21:06.499 | 99.99th=[ 144] 00:21:06.499 bw ( KiB/s): min= 640, max= 1056, per=4.15%, avg=900.63, stdev=112.08, samples=19 00:21:06.499 iops : min= 160, max= 264, avg=225.11, stdev=28.01, samples=19 00:21:06.499 lat (msec) : 4=1.61%, 10=1.35%, 20=0.93%, 50=21.09%, 100=65.93% 00:21:06.499 lat (msec) : 250=9.09% 00:21:06.499 cpu : usr=32.49%, sys=2.02%, ctx=919, majf=0, minf=9 00:21:06.499 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:06.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.499 filename1: (groupid=0, jobs=1): err= 0: pid=83586: Mon Jul 15 19:11:31 2024 00:21:06.499 read: IOPS=225, BW=904KiB/s (926kB/s)(9076KiB/10041msec) 00:21:06.499 slat (usec): min=5, max=8034, avg=19.11, stdev=182.97 00:21:06.499 clat (msec): min=15, max=146, avg=70.64, stdev=19.43 00:21:06.499 lat (msec): min=16, max=146, avg=70.66, stdev=19.44 00:21:06.499 clat percentiles (msec): 00:21:06.499 | 1.00th=[ 18], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:21:06.499 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:21:06.499 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 97], 95.00th=[ 109], 00:21:06.499 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 127], 00:21:06.499 | 99.99th=[ 146] 00:21:06.499 bw ( KiB/s): min= 616, max= 1266, per=4.16%, avg=902.90, stdev=135.53, samples=20 00:21:06.499 iops : min= 154, max= 316, avg=225.65, stdev=33.81, samples=20 00:21:06.499 lat (msec) : 20=1.32%, 50=15.51%, 100=74.79%, 250=8.37% 00:21:06.499 cpu : usr=35.13%, sys=2.50%, ctx=1044, majf=0, minf=9 00:21:06.499 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:06.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 issued rwts: total=2269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.499 filename2: (groupid=0, jobs=1): err= 0: pid=83587: Mon Jul 15 19:11:31 2024 00:21:06.499 read: IOPS=225, BW=902KiB/s (923kB/s)(9036KiB/10020msec) 00:21:06.499 slat (usec): min=5, max=8030, avg=20.52, stdev=188.70 00:21:06.499 clat (msec): min=26, max=138, avg=70.81, stdev=19.08 00:21:06.499 lat (msec): min=26, max=138, avg=70.83, stdev=19.08 00:21:06.499 clat percentiles (msec): 00:21:06.499 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:21:06.499 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 72], 00:21:06.499 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 100], 95.00th=[ 109], 00:21:06.499 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 125], 00:21:06.499 | 99.99th=[ 138] 00:21:06.499 bw ( KiB/s): min= 640, max= 1122, per=4.15%, avg=900.00, stdev=121.63, samples=20 00:21:06.499 iops : min= 160, max= 280, avg=224.95, stdev=30.35, samples=20 00:21:06.499 
lat (msec) : 50=18.55%, 100=71.71%, 250=9.74% 00:21:06.499 cpu : usr=36.93%, sys=2.44%, ctx=1077, majf=0, minf=9 00:21:06.499 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:06.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.499 filename2: (groupid=0, jobs=1): err= 0: pid=83588: Mon Jul 15 19:11:31 2024 00:21:06.499 read: IOPS=230, BW=923KiB/s (945kB/s)(9248KiB/10016msec) 00:21:06.499 slat (usec): min=3, max=8028, avg=22.46, stdev=235.61 00:21:06.499 clat (msec): min=21, max=150, avg=69.19, stdev=19.57 00:21:06.499 lat (msec): min=21, max=150, avg=69.21, stdev=19.58 00:21:06.499 clat percentiles (msec): 00:21:06.499 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 48], 00:21:06.499 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:21:06.499 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:21:06.499 | 99.00th=[ 126], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 150], 00:21:06.499 | 99.99th=[ 150] 00:21:06.499 bw ( KiB/s): min= 664, max= 1096, per=4.25%, avg=920.70, stdev=113.17, samples=20 00:21:06.499 iops : min= 166, max= 274, avg=230.15, stdev=28.28, samples=20 00:21:06.499 lat (msec) : 50=23.05%, 100=69.25%, 250=7.70% 00:21:06.499 cpu : usr=34.46%, sys=2.22%, ctx=956, majf=0, minf=9 00:21:06.499 IO depths : 1=0.1%, 2=0.6%, 4=2.6%, 8=81.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:06.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 issued rwts: total=2312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.499 filename2: (groupid=0, jobs=1): err= 0: pid=83589: Mon Jul 15 19:11:31 2024 00:21:06.499 read: IOPS=228, BW=913KiB/s (934kB/s)(9144KiB/10020msec) 00:21:06.499 slat (usec): min=5, max=8033, avg=27.60, stdev=246.65 00:21:06.499 clat (msec): min=23, max=125, avg=69.96, stdev=18.86 00:21:06.499 lat (msec): min=23, max=125, avg=69.99, stdev=18.85 00:21:06.499 clat percentiles (msec): 00:21:06.499 | 1.00th=[ 38], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 50], 00:21:06.499 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 72], 00:21:06.499 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:21:06.499 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 126], 00:21:06.499 | 99.99th=[ 126] 00:21:06.499 bw ( KiB/s): min= 664, max= 1048, per=4.20%, avg=910.75, stdev=107.32, samples=20 00:21:06.499 iops : min= 166, max= 262, avg=227.65, stdev=26.82, samples=20 00:21:06.499 lat (msec) : 50=21.43%, 100=70.34%, 250=8.22% 00:21:06.499 cpu : usr=41.67%, sys=2.73%, ctx=1085, majf=0, minf=9 00:21:06.499 IO depths : 1=0.1%, 2=0.3%, 4=1.6%, 8=82.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:06.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.499 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.499 filename2: (groupid=0, jobs=1): err= 0: pid=83590: Mon Jul 15 19:11:31 2024 00:21:06.499 read: IOPS=229, BW=918KiB/s (940kB/s)(9192KiB/10009msec) 00:21:06.499 slat 
(usec): min=3, max=8049, avg=32.40, stdev=373.62 00:21:06.499 clat (msec): min=10, max=123, avg=69.49, stdev=19.37 00:21:06.500 lat (msec): min=10, max=123, avg=69.53, stdev=19.37 00:21:06.500 clat percentiles (msec): 00:21:06.500 | 1.00th=[ 22], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 49], 00:21:06.500 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:21:06.500 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:21:06.500 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:21:06.500 | 99.99th=[ 124] 00:21:06.500 bw ( KiB/s): min= 664, max= 1048, per=4.17%, avg=904.42, stdev=102.36, samples=19 00:21:06.500 iops : min= 166, max= 262, avg=226.11, stdev=25.59, samples=19 00:21:06.500 lat (msec) : 20=0.96%, 50=20.41%, 100=70.97%, 250=7.66% 00:21:06.500 cpu : usr=31.24%, sys=2.04%, ctx=851, majf=0, minf=9 00:21:06.500 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:06.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.500 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.500 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.500 filename2: (groupid=0, jobs=1): err= 0: pid=83591: Mon Jul 15 19:11:31 2024 00:21:06.500 read: IOPS=224, BW=899KiB/s (920kB/s)(9016KiB/10033msec) 00:21:06.500 slat (nsec): min=3870, max=38723, avg=14571.83, stdev=5282.72 00:21:06.500 clat (msec): min=5, max=124, avg=71.07, stdev=20.15 00:21:06.500 lat (msec): min=5, max=124, avg=71.08, stdev=20.15 00:21:06.500 clat percentiles (msec): 00:21:06.500 | 1.00th=[ 9], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:21:06.500 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:21:06.500 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 97], 95.00th=[ 109], 00:21:06.500 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 125], 99.95th=[ 125], 00:21:06.500 | 99.99th=[ 125] 00:21:06.500 bw ( KiB/s): min= 632, max= 1285, per=4.14%, avg=896.60, stdev=135.76, samples=20 00:21:06.500 iops : min= 158, max= 321, avg=224.10, stdev=33.89, samples=20 00:21:06.500 lat (msec) : 10=1.42%, 20=0.71%, 50=13.71%, 100=74.67%, 250=9.49% 00:21:06.500 cpu : usr=38.47%, sys=2.23%, ctx=1238, majf=0, minf=9 00:21:06.500 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:06.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.500 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.500 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.500 filename2: (groupid=0, jobs=1): err= 0: pid=83592: Mon Jul 15 19:11:31 2024 00:21:06.500 read: IOPS=212, BW=852KiB/s (872kB/s)(8536KiB/10022msec) 00:21:06.500 slat (usec): min=4, max=7032, avg=18.20, stdev=151.99 00:21:06.500 clat (msec): min=36, max=145, avg=75.00, stdev=18.52 00:21:06.500 lat (msec): min=36, max=145, avg=75.02, stdev=18.52 00:21:06.500 clat percentiles (msec): 00:21:06.500 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:21:06.500 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 73], 00:21:06.500 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 105], 95.00th=[ 112], 00:21:06.500 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 132], 00:21:06.500 | 99.99th=[ 146] 00:21:06.500 bw ( KiB/s): min= 656, max= 1008, per=3.92%, avg=849.60, stdev=122.39, samples=20 00:21:06.500 
iops : min= 164, max= 252, avg=212.35, stdev=30.58, samples=20 00:21:06.500 lat (msec) : 50=11.76%, 100=77.04%, 250=11.20% 00:21:06.500 cpu : usr=30.88%, sys=2.25%, ctx=931, majf=0, minf=9 00:21:06.500 IO depths : 1=0.1%, 2=1.9%, 4=7.6%, 8=75.1%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:06.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.500 complete : 0=0.0%, 4=89.4%, 8=8.9%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.500 issued rwts: total=2134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.500 filename2: (groupid=0, jobs=1): err= 0: pid=83593: Mon Jul 15 19:11:31 2024 00:21:06.500 read: IOPS=213, BW=854KiB/s (874kB/s)(8568KiB/10038msec) 00:21:06.500 slat (usec): min=5, max=8023, avg=25.59, stdev=236.09 00:21:06.500 clat (usec): min=1740, max=144894, avg=74716.08, stdev=21798.28 00:21:06.500 lat (usec): min=1748, max=144903, avg=74741.67, stdev=21802.86 00:21:06.500 clat percentiles (msec): 00:21:06.500 | 1.00th=[ 6], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 64], 00:21:06.500 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 00:21:06.500 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 108], 95.00th=[ 112], 00:21:06.500 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 144], 99.95th=[ 144], 00:21:06.500 | 99.99th=[ 146] 00:21:06.500 bw ( KiB/s): min= 592, max= 1410, per=3.92%, avg=850.40, stdev=160.35, samples=20 00:21:06.500 iops : min= 148, max= 352, avg=212.55, stdev=40.00, samples=20 00:21:06.500 lat (msec) : 2=0.09%, 10=2.05%, 20=0.84%, 50=7.66%, 100=76.38% 00:21:06.500 lat (msec) : 250=12.98% 00:21:06.500 cpu : usr=40.39%, sys=2.30%, ctx=1703, majf=0, minf=9 00:21:06.500 IO depths : 1=0.1%, 2=1.6%, 4=6.5%, 8=75.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:06.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.500 complete : 0=0.0%, 4=89.7%, 8=8.9%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.500 issued rwts: total=2142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.500 filename2: (groupid=0, jobs=1): err= 0: pid=83594: Mon Jul 15 19:11:31 2024 00:21:06.500 read: IOPS=219, BW=880KiB/s (901kB/s)(8812KiB/10019msec) 00:21:06.500 slat (usec): min=7, max=12030, avg=20.60, stdev=256.03 00:21:06.500 clat (msec): min=20, max=131, avg=72.59, stdev=19.31 00:21:06.500 lat (msec): min=20, max=131, avg=72.61, stdev=19.32 00:21:06.500 clat percentiles (msec): 00:21:06.500 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 53], 00:21:06.500 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:21:06.500 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 111], 00:21:06.500 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 132], 00:21:06.500 | 99.99th=[ 132] 00:21:06.500 bw ( KiB/s): min= 664, max= 1056, per=4.03%, avg=874.70, stdev=109.97, samples=20 00:21:06.500 iops : min= 166, max= 264, avg=218.65, stdev=27.47, samples=20 00:21:06.500 lat (msec) : 50=17.11%, 100=72.27%, 250=10.62% 00:21:06.500 cpu : usr=35.66%, sys=1.90%, ctx=1182, majf=0, minf=9 00:21:06.500 IO depths : 1=0.1%, 2=1.2%, 4=4.9%, 8=78.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:06.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.500 complete : 0=0.0%, 4=88.5%, 8=10.4%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.500 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:06.500 00:21:06.500 
Run status group 0 (all jobs): 00:21:06.500 READ: bw=21.2MiB/s (22.2MB/s), 831KiB/s-951KiB/s (851kB/s-974kB/s), io=212MiB (223MB), run=10001-10041msec 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.500 19:11:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.500 bdev_null0 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.500 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.501 [2024-07-15 19:11:31.957845] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.501 bdev_null1 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.501 { 00:21:06.501 "params": { 00:21:06.501 "name": "Nvme$subsystem", 00:21:06.501 "trtype": "$TEST_TRANSPORT", 00:21:06.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.501 "adrfam": "ipv4", 00:21:06.501 "trsvcid": "$NVMF_PORT", 00:21:06.501 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.501 "hdgst": ${hdgst:-false}, 00:21:06.501 "ddgst": ${ddgst:-false} 00:21:06.501 }, 00:21:06.501 "method": "bdev_nvme_attach_controller" 00:21:06.501 } 00:21:06.501 EOF 00:21:06.501 )") 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:06.501 19:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.501 { 00:21:06.501 "params": { 00:21:06.501 "name": "Nvme$subsystem", 00:21:06.501 "trtype": "$TEST_TRANSPORT", 00:21:06.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.501 "adrfam": "ipv4", 00:21:06.501 "trsvcid": "$NVMF_PORT", 00:21:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.501 "hdgst": ${hdgst:-false}, 00:21:06.501 "ddgst": ${ddgst:-false} 00:21:06.501 }, 00:21:06.501 "method": "bdev_nvme_attach_controller" 00:21:06.501 } 00:21:06.501 EOF 00:21:06.501 )") 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:06.501 "params": { 00:21:06.501 "name": "Nvme0", 00:21:06.501 "trtype": "tcp", 00:21:06.501 "traddr": "10.0.0.2", 00:21:06.501 "adrfam": "ipv4", 00:21:06.501 "trsvcid": "4420", 00:21:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:06.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:06.501 "hdgst": false, 00:21:06.501 "ddgst": false 00:21:06.501 }, 00:21:06.501 "method": "bdev_nvme_attach_controller" 00:21:06.501 },{ 00:21:06.501 "params": { 00:21:06.501 "name": "Nvme1", 00:21:06.501 "trtype": "tcp", 00:21:06.501 "traddr": "10.0.0.2", 00:21:06.501 "adrfam": "ipv4", 00:21:06.501 "trsvcid": "4420", 00:21:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:06.501 "hdgst": false, 00:21:06.501 "ddgst": false 00:21:06.501 }, 00:21:06.501 "method": "bdev_nvme_attach_controller" 00:21:06.501 }' 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:06.501 19:11:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:06.501 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:06.501 ... 00:21:06.501 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:06.501 ... 
00:21:06.501 fio-3.35 00:21:06.501 Starting 4 threads 00:21:10.687 00:21:10.687 filename0: (groupid=0, jobs=1): err= 0: pid=83728: Mon Jul 15 19:11:37 2024 00:21:10.687 read: IOPS=2088, BW=16.3MiB/s (17.1MB/s)(81.6MiB/5003msec) 00:21:10.687 slat (nsec): min=6916, max=50294, avg=11251.59, stdev=3827.85 00:21:10.687 clat (usec): min=679, max=8545, avg=3790.04, stdev=813.91 00:21:10.687 lat (usec): min=688, max=8577, avg=3801.29, stdev=813.82 00:21:10.687 clat percentiles (usec): 00:21:10.687 | 1.00th=[ 1401], 5.00th=[ 1434], 10.00th=[ 2835], 20.00th=[ 3294], 00:21:10.687 | 30.00th=[ 3851], 40.00th=[ 4015], 50.00th=[ 4146], 60.00th=[ 4178], 00:21:10.687 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4228], 95.00th=[ 4686], 00:21:10.687 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 6063], 99.95th=[ 8455], 00:21:10.687 | 99.99th=[ 8586] 00:21:10.687 bw ( KiB/s): min=15104, max=18880, per=24.52%, avg=16268.44, stdev=1298.31, samples=9 00:21:10.687 iops : min= 1888, max= 2360, avg=2033.56, stdev=162.29, samples=9 00:21:10.687 lat (usec) : 750=0.02%, 1000=0.03% 00:21:10.687 lat (msec) : 2=6.59%, 4=32.60%, 10=60.76% 00:21:10.687 cpu : usr=91.46%, sys=7.46%, ctx=12, majf=0, minf=9 00:21:10.687 IO depths : 1=0.1%, 2=15.0%, 4=56.4%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:10.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.687 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.687 issued rwts: total=10451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.687 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:10.687 filename0: (groupid=0, jobs=1): err= 0: pid=83729: Mon Jul 15 19:11:37 2024 00:21:10.687 read: IOPS=1913, BW=14.9MiB/s (15.7MB/s)(74.7MiB/5001msec) 00:21:10.687 slat (nsec): min=7553, max=42790, avg=14774.19, stdev=3896.98 00:21:10.687 clat (usec): min=1275, max=6205, avg=4123.96, stdev=438.40 00:21:10.687 lat (usec): min=1290, max=6230, avg=4138.73, stdev=438.04 00:21:10.687 clat percentiles (usec): 00:21:10.687 | 1.00th=[ 2900], 5.00th=[ 3294], 10.00th=[ 3785], 20.00th=[ 3884], 00:21:10.687 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4146], 60.00th=[ 4178], 00:21:10.687 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4490], 95.00th=[ 4883], 00:21:10.687 | 99.00th=[ 5473], 99.50th=[ 5932], 99.90th=[ 6128], 99.95th=[ 6194], 00:21:10.687 | 99.99th=[ 6194] 00:21:10.687 bw ( KiB/s): min=14928, max=16896, per=23.09%, avg=15318.89, stdev=598.25, samples=9 00:21:10.687 iops : min= 1866, max= 2112, avg=1914.78, stdev=74.81, samples=9 00:21:10.687 lat (msec) : 2=0.27%, 4=21.85%, 10=77.88% 00:21:10.687 cpu : usr=91.92%, sys=7.34%, ctx=10, majf=0, minf=10 00:21:10.687 IO depths : 1=0.1%, 2=22.1%, 4=52.4%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:10.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.687 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.687 issued rwts: total=9567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.687 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:10.687 filename1: (groupid=0, jobs=1): err= 0: pid=83730: Mon Jul 15 19:11:37 2024 00:21:10.687 read: IOPS=1981, BW=15.5MiB/s (16.2MB/s)(77.4MiB/5002msec) 00:21:10.687 slat (nsec): min=7770, max=48432, avg=14390.09, stdev=3868.56 00:21:10.687 clat (usec): min=1155, max=7463, avg=3986.84, stdev=533.64 00:21:10.688 lat (usec): min=1163, max=7491, avg=4001.23, stdev=533.71 00:21:10.688 clat percentiles (usec): 00:21:10.688 | 1.00th=[ 1975], 5.00th=[ 2900], 10.00th=[ 3294], 
20.00th=[ 3818], 00:21:10.688 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4146], 60.00th=[ 4178], 00:21:10.688 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4228], 95.00th=[ 4555], 00:21:10.688 | 99.00th=[ 5080], 99.50th=[ 5080], 99.90th=[ 6980], 99.95th=[ 7373], 00:21:10.688 | 99.99th=[ 7439] 00:21:10.688 bw ( KiB/s): min=15104, max=17264, per=24.01%, avg=15930.78, stdev=972.68, samples=9 00:21:10.688 iops : min= 1888, max= 2158, avg=1991.33, stdev=121.57, samples=9 00:21:10.688 lat (msec) : 2=1.08%, 4=27.26%, 10=71.66% 00:21:10.688 cpu : usr=91.98%, sys=7.24%, ctx=11, majf=0, minf=9 00:21:10.688 IO depths : 1=0.1%, 2=19.5%, 4=54.1%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:10.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.688 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.688 issued rwts: total=9911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.688 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:10.688 filename1: (groupid=0, jobs=1): err= 0: pid=83731: Mon Jul 15 19:11:37 2024 00:21:10.688 read: IOPS=2311, BW=18.1MiB/s (18.9MB/s)(90.3MiB/5002msec) 00:21:10.688 slat (usec): min=7, max=122, avg=14.67, stdev= 4.68 00:21:10.688 clat (usec): min=1089, max=7052, avg=3418.62, stdev=978.78 00:21:10.688 lat (usec): min=1098, max=7060, avg=3433.29, stdev=978.80 00:21:10.688 clat percentiles (usec): 00:21:10.688 | 1.00th=[ 1434], 5.00th=[ 1467], 10.00th=[ 1598], 20.00th=[ 2769], 00:21:10.688 | 30.00th=[ 2900], 40.00th=[ 3392], 50.00th=[ 3851], 60.00th=[ 4015], 00:21:10.688 | 70.00th=[ 4047], 80.00th=[ 4146], 90.00th=[ 4228], 95.00th=[ 4621], 00:21:10.688 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5407], 99.95th=[ 5407], 00:21:10.688 | 99.99th=[ 6849] 00:21:10.688 bw ( KiB/s): min=15232, max=20880, per=28.43%, avg=18860.44, stdev=2177.73, samples=9 00:21:10.688 iops : min= 1904, max= 2610, avg=2357.56, stdev=272.22, samples=9 00:21:10.688 lat (msec) : 2=16.03%, 4=43.53%, 10=40.44% 00:21:10.688 cpu : usr=91.14%, sys=7.66%, ctx=9, majf=0, minf=9 00:21:10.688 IO depths : 1=0.1%, 2=7.0%, 4=60.6%, 8=32.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:10.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.688 complete : 0=0.0%, 4=97.4%, 8=2.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.688 issued rwts: total=11561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.688 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:10.688 00:21:10.688 Run status group 0 (all jobs): 00:21:10.688 READ: bw=64.8MiB/s (67.9MB/s), 14.9MiB/s-18.1MiB/s (15.7MB/s-18.9MB/s), io=324MiB (340MB), run=5001-5003msec 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
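The teardown starting here issues, for each subsystem index, an nvmf_delete_subsystem RPC followed by a bdev_null_delete RPC. Driven by hand with SPDK's rpc.py instead of the harness's destroy_subsystem helper, the same sequence would look roughly like this (rpc.py path and default RPC socket assumed):

    for i in 0 1; do
      ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      ./scripts/rpc.py bdev_null_delete "bdev_null$i"
    done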
00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.947 ************************************ 00:21:10.947 END TEST fio_dif_rand_params 00:21:10.947 ************************************ 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.947 00:21:10.947 real 0m23.529s 00:21:10.947 user 2m2.744s 00:21:10.947 sys 0m8.975s 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:10.947 19:11:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:10.947 19:11:38 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:10.947 19:11:38 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:10.947 19:11:38 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:10.947 19:11:38 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.947 19:11:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:10.947 ************************************ 00:21:10.947 START TEST fio_dif_digest 00:21:10.947 ************************************ 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest 
-- target/dif.sh@130 -- # create_subsystems 0 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:10.947 bdev_null0 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:10.947 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:10.948 [2024-07-15 19:11:38.133991] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.948 { 00:21:10.948 "params": { 00:21:10.948 "name": "Nvme$subsystem", 00:21:10.948 "trtype": "$TEST_TRANSPORT", 00:21:10.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.948 "adrfam": "ipv4", 00:21:10.948 "trsvcid": "$NVMF_PORT", 00:21:10.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.948 "hdgst": ${hdgst:-false}, 00:21:10.948 "ddgst": ${ddgst:-false} 00:21:10.948 }, 00:21:10.948 "method": "bdev_nvme_attach_controller" 00:21:10.948 } 00:21:10.948 EOF 00:21:10.948 )") 00:21:10.948 19:11:38 
nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
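The sanitizer probe running here (ldd on the bdev fio plugin, grepped for libasan and libclang_rt.asan) determines what lands in LD_PRELOAD a few lines below: any ASan runtime the plugin links against is preloaded ahead of the plugin itself. A rough standalone equivalent, using the fio and plugin paths printed in this trace and placeholder file names in place of the /dev/fd descriptors the harness passes:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Empty when the plugin was not built with ASan, as in this run.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./job.fio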
00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:10.948 "params": { 00:21:10.948 "name": "Nvme0", 00:21:10.948 "trtype": "tcp", 00:21:10.948 "traddr": "10.0.0.2", 00:21:10.948 "adrfam": "ipv4", 00:21:10.948 "trsvcid": "4420", 00:21:10.948 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:10.948 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:10.948 "hdgst": true, 00:21:10.948 "ddgst": true 00:21:10.948 }, 00:21:10.948 "method": "bdev_nvme_attach_controller" 00:21:10.948 }' 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:10.948 19:11:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:11.207 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:11.207 ... 
00:21:11.207 fio-3.35 00:21:11.207 Starting 3 threads 00:21:23.405 00:21:23.405 filename0: (groupid=0, jobs=1): err= 0: pid=83838: Mon Jul 15 19:11:48 2024 00:21:23.405 read: IOPS=228, BW=28.5MiB/s (29.9MB/s)(285MiB/10005msec) 00:21:23.405 slat (nsec): min=7736, max=43254, avg=10722.85, stdev=3740.52 00:21:23.405 clat (usec): min=8195, max=13663, avg=13120.02, stdev=202.77 00:21:23.405 lat (usec): min=8203, max=13679, avg=13130.74, stdev=202.84 00:21:23.405 clat percentiles (usec): 00:21:23.405 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:21:23.405 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13042], 60.00th=[13173], 00:21:23.405 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13304], 00:21:23.405 | 99.00th=[13566], 99.50th=[13566], 99.90th=[13698], 99.95th=[13698], 00:21:23.405 | 99.99th=[13698] 00:21:23.405 bw ( KiB/s): min=28416, max=29952, per=33.37%, avg=29224.42, stdev=310.77, samples=19 00:21:23.405 iops : min= 222, max= 234, avg=228.32, stdev= 2.43, samples=19 00:21:23.405 lat (msec) : 10=0.13%, 20=99.87% 00:21:23.405 cpu : usr=90.16%, sys=9.28%, ctx=16, majf=0, minf=0 00:21:23.405 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.405 issued rwts: total=2283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.405 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:23.405 filename0: (groupid=0, jobs=1): err= 0: pid=83839: Mon Jul 15 19:11:48 2024 00:21:23.405 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(285MiB/10002msec) 00:21:23.405 slat (nsec): min=7898, max=55640, avg=11257.30, stdev=4468.75 00:21:23.405 clat (usec): min=11770, max=19562, avg=13132.80, stdev=257.02 00:21:23.405 lat (usec): min=11778, max=19590, avg=13144.06, stdev=257.68 00:21:23.405 clat percentiles (usec): 00:21:23.405 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:21:23.405 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13042], 60.00th=[13042], 00:21:23.405 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13304], 00:21:23.405 | 99.00th=[13566], 99.50th=[13566], 99.90th=[19530], 99.95th=[19530], 00:21:23.405 | 99.99th=[19530] 00:21:23.405 bw ( KiB/s): min=28416, max=29952, per=33.32%, avg=29184.00, stdev=362.04, samples=19 00:21:23.405 iops : min= 222, max= 234, avg=228.00, stdev= 2.83, samples=19 00:21:23.405 lat (msec) : 20=100.00% 00:21:23.405 cpu : usr=90.50%, sys=8.92%, ctx=12, majf=0, minf=9 00:21:23.405 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.405 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.405 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:23.405 filename0: (groupid=0, jobs=1): err= 0: pid=83840: Mon Jul 15 19:11:48 2024 00:21:23.405 read: IOPS=228, BW=28.5MiB/s (29.9MB/s)(285MiB/10006msec) 00:21:23.405 slat (nsec): min=8118, max=52358, avg=11224.08, stdev=4059.74 00:21:23.405 clat (usec): min=8404, max=13999, avg=13119.70, stdev=197.19 00:21:23.405 lat (usec): min=8413, max=14012, avg=13130.92, stdev=197.10 00:21:23.405 clat percentiles (usec): 00:21:23.405 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:21:23.405 | 30.00th=[13042], 40.00th=[13042], 
50.00th=[13042], 60.00th=[13042], 00:21:23.405 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13304], 00:21:23.406 | 99.00th=[13566], 99.50th=[13566], 99.90th=[13960], 99.95th=[13960], 00:21:23.406 | 99.99th=[13960] 00:21:23.406 bw ( KiB/s): min=29184, max=29952, per=33.37%, avg=29224.42, stdev=176.19, samples=19 00:21:23.406 iops : min= 228, max= 234, avg=228.32, stdev= 1.38, samples=19 00:21:23.406 lat (msec) : 10=0.13%, 20=99.87% 00:21:23.406 cpu : usr=90.53%, sys=8.90%, ctx=34, majf=0, minf=0 00:21:23.406 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.406 issued rwts: total=2283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.406 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:23.406 00:21:23.406 Run status group 0 (all jobs): 00:21:23.406 READ: bw=85.5MiB/s (89.7MB/s), 28.5MiB/s-28.5MiB/s (29.9MB/s-29.9MB/s), io=856MiB (897MB), run=10002-10006msec 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:23.406 ************************************ 00:21:23.406 END TEST fio_dif_digest 00:21:23.406 ************************************ 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.406 00:21:23.406 real 0m10.976s 00:21:23.406 user 0m27.745s 00:21:23.406 sys 0m2.973s 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:23.406 19:11:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:23.406 19:11:49 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:23.406 19:11:49 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:23.406 19:11:49 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:23.406 19:11:49 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:23.406 19:11:49 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:21:23.406 19:11:49 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:23.406 19:11:49 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:21:23.406 19:11:49 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.406 19:11:49 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:23.406 rmmod nvme_tcp 00:21:23.406 rmmod nvme_fabrics 00:21:23.406 rmmod nvme_keyring 00:21:23.406 19:11:49 
nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:23.406 19:11:49 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:21:23.406 19:11:49 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:21:23.406 19:11:49 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83089 ']' 00:21:23.406 19:11:49 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83089 00:21:23.406 19:11:49 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 83089 ']' 00:21:23.406 19:11:49 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 83089 00:21:23.406 19:11:49 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:21:23.406 19:11:49 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.406 19:11:49 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83089 00:21:23.406 killing process with pid 83089 00:21:23.406 19:11:49 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:23.406 19:11:49 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:23.406 19:11:49 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83089' 00:21:23.406 19:11:49 nvmf_dif -- common/autotest_common.sh@967 -- # kill 83089 00:21:23.406 19:11:49 nvmf_dif -- common/autotest_common.sh@972 -- # wait 83089 00:21:23.406 19:11:49 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:23.406 19:11:49 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:23.406 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:23.406 Waiting for block devices as requested 00:21:23.406 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:23.406 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:23.406 19:11:50 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:23.406 19:11:50 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:23.406 19:11:50 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.406 19:11:50 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:23.406 19:11:50 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.406 19:11:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:23.406 19:11:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.406 19:11:50 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:23.406 ************************************ 00:21:23.406 END TEST nvmf_dif 00:21:23.406 ************************************ 00:21:23.406 00:21:23.406 real 0m59.713s 00:21:23.406 user 3m46.864s 00:21:23.406 sys 0m20.468s 00:21:23.406 19:11:50 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:23.406 19:11:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:23.406 19:11:50 -- common/autotest_common.sh@1142 -- # return 0 00:21:23.406 19:11:50 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:23.406 19:11:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:23.406 19:11:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:23.406 19:11:50 -- common/autotest_common.sh@10 -- # set +x 00:21:23.406 ************************************ 00:21:23.406 START TEST nvmf_abort_qd_sizes 00:21:23.406 ************************************ 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:23.406 * Looking for test storage... 00:21:23.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:23.406 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:23.407 19:11:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:23.407 Cannot find device "nvmf_tgt_br" 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:23.407 Cannot find device "nvmf_tgt_br2" 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:23.407 Cannot find device "nvmf_tgt_br" 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:23.407 Cannot find device "nvmf_tgt_br2" 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:23.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:23.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:23.407 19:11:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:23.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:21:23.407 00:21:23.407 --- 10.0.0.2 ping statistics --- 00:21:23.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.407 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:23.407 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:23.407 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:21:23.407 00:21:23.407 --- 10.0.0.3 ping statistics --- 00:21:23.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.407 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:23.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:23.407 00:21:23.407 --- 10.0.0.1 ping statistics --- 00:21:23.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.407 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:23.407 19:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:23.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:23.973 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:24.231 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84429 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84429 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84429 ']' 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.231 19:11:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:24.231 [2024-07-15 19:11:51.454547] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
00:21:24.231 [2024-07-15 19:11:51.454895] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.491 [2024-07-15 19:11:51.596897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.491 [2024-07-15 19:11:51.736616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.491 [2024-07-15 19:11:51.736919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.491 [2024-07-15 19:11:51.737021] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.491 [2024-07-15 19:11:51.737134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.491 [2024-07-15 19:11:51.737223] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.491 [2024-07-15 19:11:51.737477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.491 [2024-07-15 19:11:51.737682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.491 [2024-07-15 19:11:51.738429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.491 [2024-07-15 19:11:51.738449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.750 [2024-07-15 19:11:51.796263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:21:25.318 19:11:52 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
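nvme_in_userspace, traced above, enumerates PCI functions with class 01, subclass 08, prog-if 02 (NVMe) and filters them through pci_can_use before handing the surviving addresses to the abort tests. Assembled into a single pipeline from the individual commands shown, the enumeration step is essentially:

    # Print the PCI addresses of NVMe controllers (class code 0108, prog-if 02).
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'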
00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:25.318 19:11:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:25.318 ************************************ 00:21:25.318 START TEST spdk_target_abort 00:21:25.318 ************************************ 00:21:25.318 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:21:25.318 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:25.318 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:25.318 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.318 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:25.577 spdk_targetn1 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:25.577 [2024-07-15 19:11:52.674986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:25.577 [2024-07-15 19:11:52.703122] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.577 19:11:52 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:25.577 19:11:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:28.866 Initializing NVMe Controllers 00:21:28.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:28.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:28.866 Initialization complete. Launching workers. 
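
The spdk_target_abort setup traced above turns the first local NVMe drive into an NVMe-oF/TCP target: the PCIe controller at 0000:00:10.0 is attached as bdev spdk_target (exposing spdk_targetn1), a TCP transport is created, and the namespace is exported through subsystem nqn.2016-06.io.spdk:testnqn listening on 10.0.0.2:4420. The rpc_cmd wrapper in the trace forwards to scripts/rpc.py; a condensed sketch of the equivalent direct calls, assuming the running target's default RPC socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

The first of the three abort runs (queue depth 4) has just launched its workers; its per-run totals follow below.
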
00:21:28.866 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11213, failed: 0 00:21:28.866 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1018, failed to submit 10195 00:21:28.866 success 836, unsuccess 182, failed 0 00:21:28.866 19:11:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:28.866 19:11:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:32.186 Initializing NVMe Controllers 00:21:32.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:32.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:32.187 Initialization complete. Launching workers. 00:21:32.187 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8901, failed: 0 00:21:32.187 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1154, failed to submit 7747 00:21:32.187 success 402, unsuccess 752, failed 0 00:21:32.187 19:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:32.187 19:11:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:35.470 Initializing NVMe Controllers 00:21:35.470 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:35.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:35.470 Initialization complete. Launching workers. 
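
Each run drives the abort example against the target at a different submission queue depth; the summary lines report how many I/Os completed, how many aborts were submitted, and — broadly, as success/unsuccess — whether each abort landed before or after its target command finished. The loop, as traced in abort_qd_sizes.sh:

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    qds=(4 24 64)
    for qd in "${qds[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort \
            -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

The queue-depth-64 run is still in flight at this point; its totals appear below, after which the subsystem and the spdk_target controller are torn down.
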
00:21:35.470 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32224, failed: 0 00:21:35.470 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2266, failed to submit 29958 00:21:35.470 success 464, unsuccess 1802, failed 0 00:21:35.470 19:12:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:35.470 19:12:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.470 19:12:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:35.470 19:12:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.470 19:12:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:35.470 19:12:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.470 19:12:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:36.036 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.036 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84429 00:21:36.036 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84429 ']' 00:21:36.036 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84429 00:21:36.036 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:21:36.036 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.036 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84429 00:21:36.036 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:36.036 killing process with pid 84429 00:21:36.036 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:36.036 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84429' 00:21:36.036 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84429 00:21:36.036 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84429 00:21:36.293 ************************************ 00:21:36.293 END TEST spdk_target_abort 00:21:36.293 ************************************ 00:21:36.293 00:21:36.293 real 0m10.795s 00:21:36.293 user 0m43.885s 00:21:36.293 sys 0m2.193s 00:21:36.293 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:36.293 19:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:36.293 19:12:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:36.293 19:12:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:36.293 19:12:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:36.293 19:12:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:36.294 19:12:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:36.294 
************************************ 00:21:36.294 START TEST kernel_target_abort 00:21:36.294 ************************************ 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:36.294 19:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:36.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:36.552 Waiting for block devices as requested 00:21:36.811 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:36.811 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:36.811 No valid GPT data, bailing 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:36.811 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:37.069 No valid GPT data, bailing 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
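
Before building the kernel target, configure_kernel_target scans /sys/block/nvme* for a namespace it can safely export: zoned devices are skipped, and spdk-gpt.py plus blkid confirm the candidate has no partition table in use ("No valid GPT data, bailing" is the expected outcome for an idle disk). The last device that passes, /dev/nvme1n1 here, becomes the backing block device. A simplified sketch of that scan, assuming blkid's PTTYPE probe as the only in-use check (the helper also runs spdk-gpt.py):

    nvme=""
    for block in /sys/block/nvme*; do
        dev=/dev/${block##*/}
        # skip zoned namespaces
        [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
        # keep the device only if no partition table is detected
        pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null) || true
        [[ -z $pt ]] && nvme=$dev
    done
    echo "kernel target will export: $nvme"

The scan of the remaining namespaces continues below.
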
00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:37.069 No valid GPT data, bailing 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:37.069 No valid GPT data, bailing 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:37.069 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff --hostid=1bdc3113-659b-4df6-a9cf-a9738596adff -a 10.0.0.1 -t tcp -s 4420 00:21:37.070 00:21:37.070 Discovery Log Number of Records 2, Generation counter 2 00:21:37.070 =====Discovery Log Entry 0====== 00:21:37.070 trtype: tcp 00:21:37.070 adrfam: ipv4 00:21:37.070 subtype: current discovery subsystem 00:21:37.070 treq: not specified, sq flow control disable supported 00:21:37.070 portid: 1 00:21:37.070 trsvcid: 4420 00:21:37.070 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:37.070 traddr: 10.0.0.1 00:21:37.070 eflags: none 00:21:37.070 sectype: none 00:21:37.070 =====Discovery Log Entry 1====== 00:21:37.070 trtype: tcp 00:21:37.070 adrfam: ipv4 00:21:37.070 subtype: nvme subsystem 00:21:37.070 treq: not specified, sq flow control disable supported 00:21:37.070 portid: 1 00:21:37.070 trsvcid: 4420 00:21:37.070 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:37.070 traddr: 10.0.0.1 00:21:37.070 eflags: none 00:21:37.070 sectype: none 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:37.070 19:12:04 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:37.070 19:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:40.358 Initializing NVMe Controllers 00:21:40.358 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:40.358 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:40.358 Initialization complete. Launching workers. 00:21:40.358 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34143, failed: 0 00:21:40.358 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34143, failed to submit 0 00:21:40.358 success 0, unsuccess 34143, failed 0 00:21:40.358 19:12:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:40.358 19:12:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:43.645 Initializing NVMe Controllers 00:21:43.645 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:43.645 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:43.645 Initialization complete. Launching workers. 
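
The kernel_target_abort setup traced earlier exports /dev/nvme1n1 through the Linux in-kernel nvmet target rather than through SPDK: a subsystem and a port are created under /sys/kernel/config/nvmet and the same testnqn is published on 10.0.0.1:4420. The xtrace shows the mkdir/echo/ln steps but not the redirect targets; a sketch using the standard nvmet configfs attribute names (the exact attribute files, including attr_model, are assumptions since they are not visible in the trace):

    modprobe nvmet          # nvmet_tcp is loaded as well; both are removed at cleanup
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"   # model string written by the helper
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420   # lists the discovery subsystem plus testnqn

The runs against this target report success 0 and count every abort as unsuccess, which is consistent with nvmet completing abort commands without cancelling the targeted I/O; the queue-depth-24 totals continue below.
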
00:21:43.645 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73561, failed: 0 00:21:43.645 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32397, failed to submit 41164 00:21:43.645 success 0, unsuccess 32397, failed 0 00:21:43.645 19:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:43.645 19:12:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:46.945 Initializing NVMe Controllers 00:21:46.945 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:46.945 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:46.945 Initialization complete. Launching workers. 00:21:46.945 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 88789, failed: 0 00:21:46.945 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22182, failed to submit 66607 00:21:46.945 success 0, unsuccess 22182, failed 0 00:21:46.945 19:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:46.945 19:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:46.945 19:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:46.945 19:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:46.945 19:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:46.945 19:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:46.945 19:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:46.945 19:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:46.945 19:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:46.945 19:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:47.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:50.039 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:50.039 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:50.039 00:21:50.039 real 0m13.441s 00:21:50.039 user 0m6.384s 00:21:50.039 sys 0m4.525s 00:21:50.039 19:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.039 19:12:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:50.039 ************************************ 00:21:50.039 END TEST kernel_target_abort 00:21:50.039 ************************************ 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:50.039 
19:12:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.039 rmmod nvme_tcp 00:21:50.039 rmmod nvme_fabrics 00:21:50.039 rmmod nvme_keyring 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84429 ']' 00:21:50.039 19:12:16 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84429 00:21:50.039 19:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84429 ']' 00:21:50.039 19:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84429 00:21:50.039 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84429) - No such process 00:21:50.039 19:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84429 is not found' 00:21:50.039 Process with pid 84429 is not found 00:21:50.039 19:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:50.039 19:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:50.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:50.039 Waiting for block devices as requested 00:21:50.299 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:50.299 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:50.299 19:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:50.299 19:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:50.299 19:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.299 19:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.299 19:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.299 19:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:50.299 19:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.299 19:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:50.299 ************************************ 00:21:50.299 END TEST nvmf_abort_qd_sizes 00:21:50.299 ************************************ 00:21:50.299 00:21:50.299 real 0m27.446s 00:21:50.299 user 0m51.481s 00:21:50.299 sys 0m8.026s 00:21:50.299 19:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.299 19:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:50.299 19:12:17 -- common/autotest_common.sh@1142 -- # return 0 00:21:50.299 19:12:17 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:50.299 19:12:17 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:21:50.299 19:12:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:50.299 19:12:17 -- common/autotest_common.sh@10 -- # set +x 00:21:50.569 ************************************ 00:21:50.569 START TEST keyring_file 00:21:50.569 ************************************ 00:21:50.569 19:12:17 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:50.569 * Looking for test storage... 00:21:50.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:50.569 19:12:17 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.569 19:12:17 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.569 19:12:17 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.569 19:12:17 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.569 19:12:17 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.569 19:12:17 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.569 19:12:17 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:50.569 19:12:17 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1te3tGmYL2 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1te3tGmYL2 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1te3tGmYL2 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.1te3tGmYL2 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ePrI0OXqez 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:50.569 19:12:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ePrI0OXqez 00:21:50.569 19:12:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ePrI0OXqez 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ePrI0OXqez 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@30 -- # tgtpid=85300 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:50.569 19:12:17 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85300 00:21:50.569 19:12:17 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85300 ']' 00:21:50.569 19:12:17 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.569 19:12:17 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.569 19:12:17 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.569 19:12:17 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.569 19:12:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:50.828 [2024-07-15 19:12:17.890365] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
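
For keyring_file, the two 16-byte hex keys are first wrapped into NVMe/TCP TLS interchange form: prep_key runs format_interchange_psk, which pipes the key through an inline Python helper with prefix NVMeTLSkey-1 and digest 0, writes the result to a mktemp path (/tmp/tmp.1te3tGmYL2 and /tmp/tmp.ePrI0OXqez above), and chmods it to 0600, since the keyring rejects group- or world-accessible files later in the test. A condensed sketch of that preparation, reusing the test's own helper (assumes test/nvmf/common.sh has been sourced):

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)
    format_interchange_psk "$key" 0 > "$path"   # NVMeTLSkey-1 framing, digest 0
    chmod 0600 "$path"                          # looser modes fail keyring_file_add_key
                                                # (see the 0660 failure further below)

The spdk_tgt startup banner and its DPDK EAL parameters continue below; the key files are registered with the bdevperf instance once it is up.
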
00:21:50.828 [2024-07-15 19:12:17.890462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85300 ] 00:21:50.828 [2024-07-15 19:12:18.025735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.086 [2024-07-15 19:12:18.150975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.086 [2024-07-15 19:12:18.208569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:51.653 19:12:18 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.653 19:12:18 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:51.653 19:12:18 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:51.654 19:12:18 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.654 19:12:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:51.654 [2024-07-15 19:12:18.897359] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.654 null0 00:21:51.654 [2024-07-15 19:12:18.929294] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:51.654 [2024-07-15 19:12:18.929524] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:51.654 [2024-07-15 19:12:18.937292] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:51.654 19:12:18 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.654 19:12:18 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:51.654 19:12:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:51.654 19:12:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:51.654 19:12:18 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:51.911 19:12:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.911 19:12:18 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:51.911 19:12:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.911 19:12:18 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:51.911 19:12:18 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.912 19:12:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:51.912 [2024-07-15 19:12:18.949287] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:51.912 request: 00:21:51.912 { 00:21:51.912 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:51.912 "secure_channel": false, 00:21:51.912 "listen_address": { 00:21:51.912 "trtype": "tcp", 00:21:51.912 "traddr": "127.0.0.1", 00:21:51.912 "trsvcid": "4420" 00:21:51.912 }, 00:21:51.912 "method": "nvmf_subsystem_add_listener", 00:21:51.912 "req_id": 1 00:21:51.912 } 00:21:51.912 Got JSON-RPC error response 00:21:51.912 response: 00:21:51.912 { 00:21:51.912 "code": -32602, 00:21:51.912 "message": "Invalid parameters" 00:21:51.912 } 00:21:51.912 19:12:18 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
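
The request/response pair above is an expected failure: the target is already listening for nqn.2016-06.io.spdk:cnode0 on 127.0.0.1:4420, so re-adding the same listener (here with "secure_channel": false) is rejected with "Listener already exists" in the target log and surfaces as JSON-RPC error -32602; the NOT wrapper asserts exactly that. Equivalent direct call, as forwarded by rpc_cmd:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
    # -> error -32602 "Invalid parameters"; target log: "Listener already exists"
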
00:21:51.912 19:12:18 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:51.912 19:12:18 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:51.912 19:12:18 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:51.912 19:12:18 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:51.912 19:12:18 keyring_file -- keyring/file.sh@46 -- # bperfpid=85317 00:21:51.912 19:12:18 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85317 /var/tmp/bperf.sock 00:21:51.912 19:12:18 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85317 ']' 00:21:51.912 19:12:18 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:51.912 19:12:18 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:51.912 19:12:18 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.912 19:12:18 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:51.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:51.912 19:12:18 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.912 19:12:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:51.912 [2024-07-15 19:12:19.013393] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 00:21:51.912 [2024-07-15 19:12:19.013481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85317 ] 00:21:51.912 [2024-07-15 19:12:19.151124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.170 [2024-07-15 19:12:19.295261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.170 [2024-07-15 19:12:19.348250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:52.737 19:12:19 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.737 19:12:19 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:52.737 19:12:19 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1te3tGmYL2 00:21:52.737 19:12:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1te3tGmYL2 00:21:52.995 19:12:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ePrI0OXqez 00:21:52.995 19:12:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ePrI0OXqez 00:21:53.254 19:12:20 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:53.254 19:12:20 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:53.254 19:12:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:53.254 19:12:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:53.254 19:12:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:53.512 19:12:20 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.1te3tGmYL2 == 
\/\t\m\p\/\t\m\p\.\1\t\e\3\t\G\m\Y\L\2 ]] 00:21:53.512 19:12:20 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:53.512 19:12:20 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:53.512 19:12:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:53.512 19:12:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:53.512 19:12:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:53.770 19:12:20 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ePrI0OXqez == \/\t\m\p\/\t\m\p\.\e\P\r\I\0\O\X\q\e\z ]] 00:21:53.770 19:12:20 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:53.770 19:12:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:53.770 19:12:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:53.770 19:12:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:53.770 19:12:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:53.770 19:12:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:54.029 19:12:21 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:54.029 19:12:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:54.029 19:12:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:54.029 19:12:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:54.029 19:12:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.029 19:12:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:54.029 19:12:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:54.287 19:12:21 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:54.287 19:12:21 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:54.287 19:12:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:54.546 [2024-07-15 19:12:21.744567] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:54.546 nvme0n1 00:21:54.804 19:12:21 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:54.804 19:12:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:54.804 19:12:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:54.804 19:12:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.804 19:12:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:54.804 19:12:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.062 19:12:22 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:55.062 19:12:22 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:55.062 19:12:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:55.062 19:12:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:55.062 19:12:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:21:55.062 19:12:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:55.062 19:12:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.321 19:12:22 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:55.321 19:12:22 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:55.321 Running I/O for 1 seconds... 00:21:56.695 00:21:56.695 Latency(us) 00:21:56.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.695 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:56.695 nvme0n1 : 1.01 11852.49 46.30 0.00 0.00 10760.46 3723.64 16324.42 00:21:56.695 =================================================================================================================== 00:21:56.695 Total : 11852.49 46.30 0.00 0.00 10760.46 3723.64 16324.42 00:21:56.695 0 00:21:56.695 19:12:23 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:56.695 19:12:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:56.695 19:12:23 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:56.695 19:12:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:56.695 19:12:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:56.695 19:12:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:56.695 19:12:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:56.695 19:12:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:56.952 19:12:24 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:56.952 19:12:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:56.952 19:12:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:56.952 19:12:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:56.952 19:12:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:56.952 19:12:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:56.952 19:12:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:57.238 19:12:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:57.238 19:12:24 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:57.238 19:12:24 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:57.238 19:12:24 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:57.238 19:12:24 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:57.238 19:12:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.238 19:12:24 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:57.238 19:12:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
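
With key0 registered, the I/O path above is exercised end to end: bdev_nvme_attach_controller connects to 127.0.0.1:4420 over TCP using --psk key0, bdevperf.py triggers the preconfigured one-second randrw workload (about 11.8k IOPS at queue depth 128), and the controller is detached again. The NOT block whose xtrace continues below repeats the attach with --psk key1, which is expected to fail since key1 does not match the PSK the target was configured with. Equivalent direct calls against the bdevperf RPC socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
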
00:21:57.238 19:12:24 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:57.238 19:12:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:57.495 [2024-07-15 19:12:24.704754] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:57.495 [2024-07-15 19:12:24.705649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156d710 (107): Transport endpoint is not connected 00:21:57.495 [2024-07-15 19:12:24.706636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156d710 (9): Bad file descriptor 00:21:57.495 [2024-07-15 19:12:24.707633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:57.495 [2024-07-15 19:12:24.707666] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:57.495 [2024-07-15 19:12:24.707677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:57.495 request: 00:21:57.495 { 00:21:57.495 "name": "nvme0", 00:21:57.495 "trtype": "tcp", 00:21:57.495 "traddr": "127.0.0.1", 00:21:57.495 "adrfam": "ipv4", 00:21:57.495 "trsvcid": "4420", 00:21:57.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.495 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:57.495 "prchk_reftag": false, 00:21:57.495 "prchk_guard": false, 00:21:57.495 "hdgst": false, 00:21:57.495 "ddgst": false, 00:21:57.495 "psk": "key1", 00:21:57.495 "method": "bdev_nvme_attach_controller", 00:21:57.495 "req_id": 1 00:21:57.495 } 00:21:57.495 Got JSON-RPC error response 00:21:57.495 response: 00:21:57.495 { 00:21:57.495 "code": -5, 00:21:57.495 "message": "Input/output error" 00:21:57.495 } 00:21:57.495 19:12:24 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:57.495 19:12:24 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:57.495 19:12:24 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:57.495 19:12:24 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:57.495 19:12:24 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:57.495 19:12:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:57.495 19:12:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:57.495 19:12:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:57.495 19:12:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:57.495 19:12:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:57.753 19:12:25 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:57.753 19:12:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:57.753 19:12:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:57.753 19:12:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:57.753 19:12:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:57.753 19:12:25 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:57.753 19:12:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:58.320 19:12:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:58.320 19:12:25 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:58.320 19:12:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:58.320 19:12:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:58.320 19:12:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:58.578 19:12:25 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:58.578 19:12:25 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:58.578 19:12:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.836 19:12:26 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:58.836 19:12:26 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.1te3tGmYL2 00:21:58.837 19:12:26 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.1te3tGmYL2 00:21:58.837 19:12:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:58.837 19:12:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.1te3tGmYL2 00:21:58.837 19:12:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:58.837 19:12:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:58.837 19:12:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:58.837 19:12:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:58.837 19:12:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1te3tGmYL2 00:21:58.837 19:12:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1te3tGmYL2 00:21:59.095 [2024-07-15 19:12:26.310022] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.1te3tGmYL2': 0100660 00:21:59.095 [2024-07-15 19:12:26.310072] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:59.095 request: 00:21:59.095 { 00:21:59.095 "name": "key0", 00:21:59.095 "path": "/tmp/tmp.1te3tGmYL2", 00:21:59.095 "method": "keyring_file_add_key", 00:21:59.095 "req_id": 1 00:21:59.095 } 00:21:59.095 Got JSON-RPC error response 00:21:59.095 response: 00:21:59.095 { 00:21:59.095 "code": -1, 00:21:59.095 "message": "Operation not permitted" 00:21:59.095 } 00:21:59.095 19:12:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:59.095 19:12:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:59.095 19:12:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:59.095 19:12:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:59.095 19:12:26 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.1te3tGmYL2 00:21:59.095 19:12:26 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1te3tGmYL2 00:21:59.095 19:12:26 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1te3tGmYL2 00:21:59.354 19:12:26 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.1te3tGmYL2 00:21:59.354 19:12:26 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:59.354 19:12:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:59.354 19:12:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:59.354 19:12:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:59.354 19:12:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:59.354 19:12:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:59.612 19:12:26 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:59.612 19:12:26 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:59.612 19:12:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:59.612 19:12:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:59.612 19:12:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:59.612 19:12:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.612 19:12:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:59.612 19:12:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:59.612 19:12:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:59.612 19:12:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:59.870 [2024-07-15 19:12:27.062212] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.1te3tGmYL2': No such file or directory 00:21:59.870 [2024-07-15 19:12:27.062267] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:59.870 [2024-07-15 19:12:27.062294] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:59.870 [2024-07-15 19:12:27.062303] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:59.870 [2024-07-15 19:12:27.062312] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:59.870 request: 00:21:59.870 { 00:21:59.870 "name": "nvme0", 00:21:59.870 "trtype": "tcp", 00:21:59.870 "traddr": "127.0.0.1", 00:21:59.870 "adrfam": "ipv4", 00:21:59.870 "trsvcid": "4420", 00:21:59.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:59.870 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:59.870 "prchk_reftag": false, 00:21:59.870 "prchk_guard": false, 00:21:59.870 "hdgst": false, 00:21:59.870 "ddgst": false, 00:21:59.870 "psk": "key0", 00:21:59.870 "method": "bdev_nvme_attach_controller", 00:21:59.870 "req_id": 1 00:21:59.870 } 00:21:59.870 
Got JSON-RPC error response 00:21:59.870 response: 00:21:59.870 { 00:21:59.870 "code": -19, 00:21:59.870 "message": "No such device" 00:21:59.870 } 00:21:59.870 19:12:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:59.870 19:12:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:59.870 19:12:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:59.870 19:12:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:59.870 19:12:27 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:59.870 19:12:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:00.131 19:12:27 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:00.131 19:12:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:00.131 19:12:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:00.131 19:12:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:00.131 19:12:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:00.131 19:12:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:00.131 19:12:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.41fzs17Sxn 00:22:00.131 19:12:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:00.131 19:12:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:00.131 19:12:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:22:00.131 19:12:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:00.131 19:12:27 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:00.131 19:12:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:22:00.131 19:12:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:22:00.131 19:12:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.41fzs17Sxn 00:22:00.131 19:12:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.41fzs17Sxn 00:22:00.131 19:12:27 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.41fzs17Sxn 00:22:00.131 19:12:27 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.41fzs17Sxn 00:22:00.131 19:12:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.41fzs17Sxn 00:22:00.389 19:12:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:00.389 19:12:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:00.648 nvme0n1 00:22:00.648 19:12:27 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:22:00.648 19:12:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:00.648 19:12:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:00.648 19:12:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:00.648 19:12:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:22:00.648 19:12:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:01.215 19:12:28 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:22:01.215 19:12:28 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:22:01.215 19:12:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:01.215 19:12:28 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:22:01.215 19:12:28 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:22:01.215 19:12:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:01.215 19:12:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:01.215 19:12:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:01.780 19:12:28 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:22:01.780 19:12:28 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:22:01.780 19:12:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:01.780 19:12:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:01.780 19:12:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:01.780 19:12:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:01.780 19:12:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:01.780 19:12:29 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:22:01.780 19:12:29 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:01.780 19:12:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:02.038 19:12:29 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:22:02.038 19:12:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:02.038 19:12:29 keyring_file -- keyring/file.sh@104 -- # jq length 00:22:02.296 19:12:29 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:22:02.296 19:12:29 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.41fzs17Sxn 00:22:02.296 19:12:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.41fzs17Sxn 00:22:02.554 19:12:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ePrI0OXqez 00:22:02.554 19:12:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ePrI0OXqez 00:22:02.812 19:12:29 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:02.812 19:12:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:03.070 nvme0n1 00:22:03.070 19:12:30 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:22:03.070 19:12:30 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:03.635 19:12:30 keyring_file -- keyring/file.sh@112 -- # config='{ 00:22:03.635 "subsystems": [ 00:22:03.635 { 00:22:03.635 "subsystem": "keyring", 00:22:03.635 "config": [ 00:22:03.635 { 00:22:03.635 "method": "keyring_file_add_key", 00:22:03.635 "params": { 00:22:03.635 "name": "key0", 00:22:03.635 "path": "/tmp/tmp.41fzs17Sxn" 00:22:03.635 } 00:22:03.635 }, 00:22:03.635 { 00:22:03.635 "method": "keyring_file_add_key", 00:22:03.635 "params": { 00:22:03.635 "name": "key1", 00:22:03.635 "path": "/tmp/tmp.ePrI0OXqez" 00:22:03.635 } 00:22:03.635 } 00:22:03.635 ] 00:22:03.635 }, 00:22:03.635 { 00:22:03.635 "subsystem": "iobuf", 00:22:03.635 "config": [ 00:22:03.635 { 00:22:03.635 "method": "iobuf_set_options", 00:22:03.635 "params": { 00:22:03.635 "small_pool_count": 8192, 00:22:03.635 "large_pool_count": 1024, 00:22:03.635 "small_bufsize": 8192, 00:22:03.635 "large_bufsize": 135168 00:22:03.635 } 00:22:03.635 } 00:22:03.635 ] 00:22:03.635 }, 00:22:03.635 { 00:22:03.635 "subsystem": "sock", 00:22:03.635 "config": [ 00:22:03.635 { 00:22:03.635 "method": "sock_set_default_impl", 00:22:03.635 "params": { 00:22:03.635 "impl_name": "uring" 00:22:03.635 } 00:22:03.635 }, 00:22:03.635 { 00:22:03.635 "method": "sock_impl_set_options", 00:22:03.635 "params": { 00:22:03.635 "impl_name": "ssl", 00:22:03.635 "recv_buf_size": 4096, 00:22:03.635 "send_buf_size": 4096, 00:22:03.635 "enable_recv_pipe": true, 00:22:03.635 "enable_quickack": false, 00:22:03.635 "enable_placement_id": 0, 00:22:03.635 "enable_zerocopy_send_server": true, 00:22:03.635 "enable_zerocopy_send_client": false, 00:22:03.635 "zerocopy_threshold": 0, 00:22:03.635 "tls_version": 0, 00:22:03.635 "enable_ktls": false 00:22:03.635 } 00:22:03.635 }, 00:22:03.635 { 00:22:03.635 "method": "sock_impl_set_options", 00:22:03.635 "params": { 00:22:03.635 "impl_name": "posix", 00:22:03.635 "recv_buf_size": 2097152, 00:22:03.635 "send_buf_size": 2097152, 00:22:03.635 "enable_recv_pipe": true, 00:22:03.635 "enable_quickack": false, 00:22:03.635 "enable_placement_id": 0, 00:22:03.635 "enable_zerocopy_send_server": true, 00:22:03.635 "enable_zerocopy_send_client": false, 00:22:03.635 "zerocopy_threshold": 0, 00:22:03.635 "tls_version": 0, 00:22:03.635 "enable_ktls": false 00:22:03.635 } 00:22:03.635 }, 00:22:03.635 { 00:22:03.635 "method": "sock_impl_set_options", 00:22:03.635 "params": { 00:22:03.635 "impl_name": "uring", 00:22:03.635 "recv_buf_size": 2097152, 00:22:03.635 "send_buf_size": 2097152, 00:22:03.635 "enable_recv_pipe": true, 00:22:03.635 "enable_quickack": false, 00:22:03.635 "enable_placement_id": 0, 00:22:03.635 "enable_zerocopy_send_server": false, 00:22:03.635 "enable_zerocopy_send_client": false, 00:22:03.635 "zerocopy_threshold": 0, 00:22:03.635 "tls_version": 0, 00:22:03.635 "enable_ktls": false 00:22:03.635 } 00:22:03.635 } 00:22:03.635 ] 00:22:03.635 }, 00:22:03.635 { 00:22:03.635 "subsystem": "vmd", 00:22:03.635 "config": [] 00:22:03.635 }, 00:22:03.635 { 00:22:03.635 "subsystem": "accel", 00:22:03.635 "config": [ 00:22:03.635 { 00:22:03.635 "method": "accel_set_options", 00:22:03.635 "params": { 00:22:03.635 "small_cache_size": 128, 00:22:03.635 "large_cache_size": 16, 00:22:03.635 "task_count": 2048, 00:22:03.635 "sequence_count": 2048, 00:22:03.635 "buf_count": 2048 00:22:03.635 } 00:22:03.635 } 00:22:03.635 ] 00:22:03.635 }, 00:22:03.635 { 00:22:03.635 "subsystem": "bdev", 00:22:03.635 "config": [ 00:22:03.635 { 
00:22:03.635 "method": "bdev_set_options", 00:22:03.635 "params": { 00:22:03.635 "bdev_io_pool_size": 65535, 00:22:03.635 "bdev_io_cache_size": 256, 00:22:03.635 "bdev_auto_examine": true, 00:22:03.635 "iobuf_small_cache_size": 128, 00:22:03.635 "iobuf_large_cache_size": 16 00:22:03.635 } 00:22:03.635 }, 00:22:03.635 { 00:22:03.635 "method": "bdev_raid_set_options", 00:22:03.635 "params": { 00:22:03.635 "process_window_size_kb": 1024 00:22:03.635 } 00:22:03.635 }, 00:22:03.635 { 00:22:03.635 "method": "bdev_iscsi_set_options", 00:22:03.635 "params": { 00:22:03.635 "timeout_sec": 30 00:22:03.635 } 00:22:03.635 }, 00:22:03.635 { 00:22:03.635 "method": "bdev_nvme_set_options", 00:22:03.635 "params": { 00:22:03.635 "action_on_timeout": "none", 00:22:03.635 "timeout_us": 0, 00:22:03.635 "timeout_admin_us": 0, 00:22:03.635 "keep_alive_timeout_ms": 10000, 00:22:03.635 "arbitration_burst": 0, 00:22:03.635 "low_priority_weight": 0, 00:22:03.635 "medium_priority_weight": 0, 00:22:03.635 "high_priority_weight": 0, 00:22:03.636 "nvme_adminq_poll_period_us": 10000, 00:22:03.636 "nvme_ioq_poll_period_us": 0, 00:22:03.636 "io_queue_requests": 512, 00:22:03.636 "delay_cmd_submit": true, 00:22:03.636 "transport_retry_count": 4, 00:22:03.636 "bdev_retry_count": 3, 00:22:03.636 "transport_ack_timeout": 0, 00:22:03.636 "ctrlr_loss_timeout_sec": 0, 00:22:03.636 "reconnect_delay_sec": 0, 00:22:03.636 "fast_io_fail_timeout_sec": 0, 00:22:03.636 "disable_auto_failback": false, 00:22:03.636 "generate_uuids": false, 00:22:03.636 "transport_tos": 0, 00:22:03.636 "nvme_error_stat": false, 00:22:03.636 "rdma_srq_size": 0, 00:22:03.636 "io_path_stat": false, 00:22:03.636 "allow_accel_sequence": false, 00:22:03.636 "rdma_max_cq_size": 0, 00:22:03.636 "rdma_cm_event_timeout_ms": 0, 00:22:03.636 "dhchap_digests": [ 00:22:03.636 "sha256", 00:22:03.636 "sha384", 00:22:03.636 "sha512" 00:22:03.636 ], 00:22:03.636 "dhchap_dhgroups": [ 00:22:03.636 "null", 00:22:03.636 "ffdhe2048", 00:22:03.636 "ffdhe3072", 00:22:03.636 "ffdhe4096", 00:22:03.636 "ffdhe6144", 00:22:03.636 "ffdhe8192" 00:22:03.636 ] 00:22:03.636 } 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "method": "bdev_nvme_attach_controller", 00:22:03.636 "params": { 00:22:03.636 "name": "nvme0", 00:22:03.636 "trtype": "TCP", 00:22:03.636 "adrfam": "IPv4", 00:22:03.636 "traddr": "127.0.0.1", 00:22:03.636 "trsvcid": "4420", 00:22:03.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:03.636 "prchk_reftag": false, 00:22:03.636 "prchk_guard": false, 00:22:03.636 "ctrlr_loss_timeout_sec": 0, 00:22:03.636 "reconnect_delay_sec": 0, 00:22:03.636 "fast_io_fail_timeout_sec": 0, 00:22:03.636 "psk": "key0", 00:22:03.636 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:03.636 "hdgst": false, 00:22:03.636 "ddgst": false 00:22:03.636 } 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "method": "bdev_nvme_set_hotplug", 00:22:03.636 "params": { 00:22:03.636 "period_us": 100000, 00:22:03.636 "enable": false 00:22:03.636 } 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "method": "bdev_wait_for_examine" 00:22:03.636 } 00:22:03.636 ] 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "subsystem": "nbd", 00:22:03.636 "config": [] 00:22:03.636 } 00:22:03.636 ] 00:22:03.636 }' 00:22:03.636 19:12:30 keyring_file -- keyring/file.sh@114 -- # killprocess 85317 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85317 ']' 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85317 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85317 00:22:03.636 killing process with pid 85317 00:22:03.636 Received shutdown signal, test time was about 1.000000 seconds 00:22:03.636 00:22:03.636 Latency(us) 00:22:03.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.636 =================================================================================================================== 00:22:03.636 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85317' 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@967 -- # kill 85317 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@972 -- # wait 85317 00:22:03.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:03.636 19:12:30 keyring_file -- keyring/file.sh@117 -- # bperfpid=85567 00:22:03.636 19:12:30 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85567 /var/tmp/bperf.sock 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85567 ']' 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.636 19:12:30 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
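A second bdevperf instance is now started from the configuration captured by save_config above; keyring/file.sh hands it the JSON over a file descriptor (-c /dev/fd/63 via bash process substitution), as the echoed configuration below shows. Here is a hedged sketch of the same round-trip, substituting a temporary file for the process substitution; the bdevperf path and options are the ones from this run, everything else is illustrative.

```python
#!/usr/bin/env python3
# Hedged sketch: dump the running configuration with save_config and start a
# fresh bdevperf from it, mirroring keyring/file.sh@112-117. A temporary file
# stands in for the bash <(echo "$config") / /dev/fd/63 construct.
import subprocess
import tempfile

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
BDEVPERF = "/home/vagrant/spdk_repo/spdk/build/examples/bdevperf"
SOCK = "/var/tmp/bperf.sock"

# Step 1: capture the current subsystem config (keyring, sock, bdev, ...).
config = subprocess.run([RPC, "-s", SOCK, "save_config"],
                        check=True, capture_output=True, text=True).stdout

# Step 2: feed the JSON to a new bdevperf via -c, as the test does below.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as cfg:
    cfg.write(config)
    cfg_path = cfg.name

# Same bdevperf options as the logged command line; the test then drives I/O
# with bdevperf.py perform_tests against the RPC socket.
subprocess.Popen([BDEVPERF, "-q", "128", "-o", "4k", "-w", "randrw",
                  "-M", "50", "-t", "1", "-m", "2",
                  "-r", SOCK, "-z", "-c", cfg_path])
```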
00:22:03.636 19:12:30 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:22:03.636 "subsystems": [ 00:22:03.636 { 00:22:03.636 "subsystem": "keyring", 00:22:03.636 "config": [ 00:22:03.636 { 00:22:03.636 "method": "keyring_file_add_key", 00:22:03.636 "params": { 00:22:03.636 "name": "key0", 00:22:03.636 "path": "/tmp/tmp.41fzs17Sxn" 00:22:03.636 } 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "method": "keyring_file_add_key", 00:22:03.636 "params": { 00:22:03.636 "name": "key1", 00:22:03.636 "path": "/tmp/tmp.ePrI0OXqez" 00:22:03.636 } 00:22:03.636 } 00:22:03.636 ] 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "subsystem": "iobuf", 00:22:03.636 "config": [ 00:22:03.636 { 00:22:03.636 "method": "iobuf_set_options", 00:22:03.636 "params": { 00:22:03.636 "small_pool_count": 8192, 00:22:03.636 "large_pool_count": 1024, 00:22:03.636 "small_bufsize": 8192, 00:22:03.636 "large_bufsize": 135168 00:22:03.636 } 00:22:03.636 } 00:22:03.636 ] 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "subsystem": "sock", 00:22:03.636 "config": [ 00:22:03.636 { 00:22:03.636 "method": "sock_set_default_impl", 00:22:03.636 "params": { 00:22:03.636 "impl_name": "uring" 00:22:03.636 } 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "method": "sock_impl_set_options", 00:22:03.636 "params": { 00:22:03.636 "impl_name": "ssl", 00:22:03.636 "recv_buf_size": 4096, 00:22:03.636 "send_buf_size": 4096, 00:22:03.636 "enable_recv_pipe": true, 00:22:03.636 "enable_quickack": false, 00:22:03.636 "enable_placement_id": 0, 00:22:03.636 "enable_zerocopy_send_server": true, 00:22:03.636 "enable_zerocopy_send_client": false, 00:22:03.636 "zerocopy_threshold": 0, 00:22:03.636 "tls_version": 0, 00:22:03.636 "enable_ktls": false 00:22:03.636 } 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "method": "sock_impl_set_options", 00:22:03.636 "params": { 00:22:03.636 "impl_name": "posix", 00:22:03.636 "recv_buf_size": 2097152, 00:22:03.636 "send_buf_size": 2097152, 00:22:03.636 "enable_recv_pipe": true, 00:22:03.636 "enable_quickack": false, 00:22:03.636 "enable_placement_id": 0, 00:22:03.636 "enable_zerocopy_send_server": true, 00:22:03.636 "enable_zerocopy_send_client": false, 00:22:03.636 "zerocopy_threshold": 0, 00:22:03.636 "tls_version": 0, 00:22:03.636 "enable_ktls": false 00:22:03.636 } 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "method": "sock_impl_set_options", 00:22:03.636 "params": { 00:22:03.636 "impl_name": "uring", 00:22:03.636 "recv_buf_size": 2097152, 00:22:03.636 "send_buf_size": 2097152, 00:22:03.636 "enable_recv_pipe": true, 00:22:03.636 "enable_quickack": false, 00:22:03.636 "enable_placement_id": 0, 00:22:03.636 "enable_zerocopy_send_server": false, 00:22:03.636 "enable_zerocopy_send_client": false, 00:22:03.636 "zerocopy_threshold": 0, 00:22:03.636 "tls_version": 0, 00:22:03.636 "enable_ktls": false 00:22:03.636 } 00:22:03.636 } 00:22:03.636 ] 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "subsystem": "vmd", 00:22:03.636 "config": [] 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "subsystem": "accel", 00:22:03.636 "config": [ 00:22:03.636 { 00:22:03.636 "method": "accel_set_options", 00:22:03.636 "params": { 00:22:03.636 "small_cache_size": 128, 00:22:03.636 "large_cache_size": 16, 00:22:03.636 "task_count": 2048, 00:22:03.636 "sequence_count": 2048, 00:22:03.636 "buf_count": 2048 00:22:03.636 } 00:22:03.636 } 00:22:03.636 ] 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "subsystem": "bdev", 00:22:03.636 "config": [ 00:22:03.636 { 00:22:03.636 "method": "bdev_set_options", 00:22:03.636 "params": { 00:22:03.636 "bdev_io_pool_size": 65535, 
00:22:03.636 "bdev_io_cache_size": 256, 00:22:03.636 "bdev_auto_examine": true, 00:22:03.636 "iobuf_small_cache_size": 128, 00:22:03.636 "iobuf_large_cache_size": 16 00:22:03.636 } 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "method": "bdev_raid_set_options", 00:22:03.636 "params": { 00:22:03.636 "process_window_size_kb": 1024 00:22:03.636 } 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "method": "bdev_iscsi_set_options", 00:22:03.636 "params": { 00:22:03.636 "timeout_sec": 30 00:22:03.636 } 00:22:03.636 }, 00:22:03.636 { 00:22:03.636 "method": "bdev_nvme_set_options", 00:22:03.636 "params": { 00:22:03.636 "action_on_timeout": "none", 00:22:03.637 "timeout_us": 0, 00:22:03.637 "timeout_admin_us": 0, 00:22:03.637 "keep_alive_timeout_ms": 10000, 00:22:03.637 "arbitration_burst": 0, 00:22:03.637 "low_priority_weight": 0, 00:22:03.637 "medium_priority_weight": 0, 00:22:03.637 "high_priority_weight": 0, 00:22:03.637 "nvme_adminq_poll_period_us": 10000, 00:22:03.637 "nvme_ioq_poll_period_us": 0, 00:22:03.637 "io_queue_requests": 512, 00:22:03.637 "delay_cm 19:12:30 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:03.637 d_submit": true, 00:22:03.637 "transport_retry_count": 4, 00:22:03.637 "bdev_retry_count": 3, 00:22:03.637 "transport_ack_timeout": 0, 00:22:03.637 "ctrlr_loss_timeout_sec": 0, 00:22:03.637 "reconnect_delay_sec": 0, 00:22:03.637 "fast_io_fail_timeout_sec": 0, 00:22:03.637 "disable_auto_failback": false, 00:22:03.637 "generate_uuids": false, 00:22:03.637 "transport_tos": 0, 00:22:03.637 "nvme_error_stat": false, 00:22:03.637 "rdma_srq_size": 0, 00:22:03.637 "io_path_stat": false, 00:22:03.637 "allow_accel_sequence": false, 00:22:03.637 "rdma_max_cq_size": 0, 00:22:03.637 "rdma_cm_event_timeout_ms": 0, 00:22:03.637 "dhchap_digests": [ 00:22:03.637 "sha256", 00:22:03.637 "sha384", 00:22:03.637 "sha512" 00:22:03.637 ], 00:22:03.637 "dhchap_dhgroups": [ 00:22:03.637 "null", 00:22:03.637 "ffdhe2048", 00:22:03.637 "ffdhe3072", 00:22:03.637 "ffdhe4096", 00:22:03.637 "ffdhe6144", 00:22:03.637 "ffdhe8192" 00:22:03.637 ] 00:22:03.637 } 00:22:03.637 }, 00:22:03.637 { 00:22:03.637 "method": "bdev_nvme_attach_controller", 00:22:03.637 "params": { 00:22:03.637 "name": "nvme0", 00:22:03.637 "trtype": "TCP", 00:22:03.637 "adrfam": "IPv4", 00:22:03.637 "traddr": "127.0.0.1", 00:22:03.637 "trsvcid": "4420", 00:22:03.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:03.637 "prchk_reftag": false, 00:22:03.637 "prchk_guard": false, 00:22:03.637 "ctrlr_loss_timeout_sec": 0, 00:22:03.637 "reconnect_delay_sec": 0, 00:22:03.637 "fast_io_fail_timeout_sec": 0, 00:22:03.637 "psk": "key0", 00:22:03.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:03.637 "hdgst": false, 00:22:03.637 "ddgst": false 00:22:03.637 } 00:22:03.637 }, 00:22:03.637 { 00:22:03.637 "method": "bdev_nvme_set_hotplug", 00:22:03.637 "params": { 00:22:03.637 "period_us": 100000, 00:22:03.637 "enable": false 00:22:03.637 } 00:22:03.637 }, 00:22:03.637 { 00:22:03.637 "method": "bdev_wait_for_examine" 00:22:03.637 } 00:22:03.637 ] 00:22:03.637 }, 00:22:03.637 { 00:22:03.637 "subsystem": "nbd", 00:22:03.637 "config": [] 00:22:03.637 } 00:22:03.637 ] 00:22:03.637 }' 00:22:03.637 19:12:30 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.637 19:12:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:03.637 [2024-07-15 19:12:30.921407] Starting SPDK v24.09-pre git sha1 cdc37ee83 / 
DPDK 24.03.0 initialization... 00:22:03.637 [2024-07-15 19:12:30.921699] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85567 ] 00:22:03.896 [2024-07-15 19:12:31.054928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.896 [2024-07-15 19:12:31.156882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.153 [2024-07-15 19:12:31.290934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:04.153 [2024-07-15 19:12:31.343972] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.716 19:12:31 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.716 19:12:31 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:22:04.716 19:12:31 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:22:04.716 19:12:31 keyring_file -- keyring/file.sh@120 -- # jq length 00:22:04.716 19:12:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:04.974 19:12:32 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:22:04.974 19:12:32 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:22:04.974 19:12:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:04.974 19:12:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:04.974 19:12:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:04.974 19:12:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:04.974 19:12:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:05.235 19:12:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:05.235 19:12:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:22:05.235 19:12:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:05.235 19:12:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:05.235 19:12:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:05.235 19:12:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:05.235 19:12:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:05.493 19:12:32 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:22:05.493 19:12:32 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:22:05.493 19:12:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:05.493 19:12:32 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:22:05.751 19:12:32 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:22:05.751 19:12:32 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:05.751 19:12:32 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.41fzs17Sxn /tmp/tmp.ePrI0OXqez 00:22:05.751 19:12:32 keyring_file -- keyring/file.sh@20 -- # killprocess 85567 00:22:05.751 19:12:32 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85567 ']' 00:22:05.751 19:12:32 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85567 00:22:05.751 19:12:32 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:22:05.751 19:12:32 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.751 19:12:32 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85567 00:22:05.751 killing process with pid 85567 00:22:05.751 Received shutdown signal, test time was about 1.000000 seconds 00:22:05.751 00:22:05.751 Latency(us) 00:22:05.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.751 =================================================================================================================== 00:22:05.751 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:05.751 19:12:32 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:05.751 19:12:32 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:05.751 19:12:32 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85567' 00:22:05.751 19:12:32 keyring_file -- common/autotest_common.sh@967 -- # kill 85567 00:22:05.751 19:12:32 keyring_file -- common/autotest_common.sh@972 -- # wait 85567 00:22:06.009 19:12:33 keyring_file -- keyring/file.sh@21 -- # killprocess 85300 00:22:06.009 19:12:33 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85300 ']' 00:22:06.009 19:12:33 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85300 00:22:06.009 19:12:33 keyring_file -- common/autotest_common.sh@953 -- # uname 00:22:06.009 19:12:33 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:06.009 19:12:33 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85300 00:22:06.009 killing process with pid 85300 00:22:06.009 19:12:33 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:06.009 19:12:33 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:06.009 19:12:33 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85300' 00:22:06.009 19:12:33 keyring_file -- common/autotest_common.sh@967 -- # kill 85300 00:22:06.009 [2024-07-15 19:12:33.177304] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:06.009 19:12:33 keyring_file -- common/autotest_common.sh@972 -- # wait 85300 00:22:06.575 00:22:06.575 real 0m15.967s 00:22:06.575 user 0m39.769s 00:22:06.575 sys 0m3.074s 00:22:06.575 19:12:33 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:06.575 ************************************ 00:22:06.575 END TEST keyring_file 00:22:06.575 ************************************ 00:22:06.575 19:12:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:06.575 19:12:33 -- common/autotest_common.sh@1142 -- # return 0 00:22:06.575 19:12:33 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:22:06.575 19:12:33 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:06.575 19:12:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:06.575 19:12:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:06.575 19:12:33 -- common/autotest_common.sh@10 -- # set +x 00:22:06.575 ************************************ 00:22:06.575 START TEST keyring_linux 00:22:06.575 ************************************ 00:22:06.575 19:12:33 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:06.575 * 
Looking for test storage... 00:22:06.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:06.575 19:12:33 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:06.575 19:12:33 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1bdc3113-659b-4df6-a9cf-a9738596adff 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=1bdc3113-659b-4df6-a9cf-a9738596adff 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:06.575 19:12:33 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.575 19:12:33 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.575 19:12:33 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.575 19:12:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.575 19:12:33 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.575 19:12:33 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.575 19:12:33 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:06.575 19:12:33 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.575 19:12:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:06.576 19:12:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:06.576 19:12:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:06.576 19:12:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:06.576 19:12:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:06.576 19:12:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:06.576 19:12:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:06.576 19:12:33 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:06.576 /tmp/:spdk-test:key0 00:22:06.576 19:12:33 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:06.576 19:12:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:06.576 /tmp/:spdk-test:key1 00:22:06.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.576 19:12:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:06.576 19:12:33 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85680 00:22:06.576 19:12:33 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:06.576 19:12:33 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85680 00:22:06.576 19:12:33 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85680 ']' 00:22:06.576 19:12:33 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.576 19:12:33 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.576 19:12:33 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.576 19:12:33 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.576 19:12:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:06.835 [2024-07-15 19:12:33.876744] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
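prep_key builds each key file with format_interchange_psk, which the log shows delegating to an inline `python -` snippet. The resulting string, added to the kernel session keyring just below as NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:, is consistent with base64 of the configured PSK bytes followed by their CRC-32. The following is a sketch under that assumption, not a verbatim copy of the helper in nvmf/common.sh.

```python
#!/usr/bin/env python3
# Hedged sketch of the PSK interchange encoding that format_interchange_psk
# appears to produce (assumption: base64 of the key bytes plus a little-endian
# CRC-32, wrapped as NVMeTLSkey-1:<digest>:<b64>:).
import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    raw = key.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(raw) & 0xFFFFFFFF)
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

if __name__ == "__main__":
    # Same hex string the test configures for key0 above.
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```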
00:22:06.835 [2024-07-15 19:12:33.876847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85680 ] 00:22:06.835 [2024-07-15 19:12:34.015633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.835 [2024-07-15 19:12:34.118674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.094 [2024-07-15 19:12:34.172794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:07.661 19:12:34 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.661 19:12:34 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:22:07.661 19:12:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:07.661 19:12:34 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.661 19:12:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:07.661 [2024-07-15 19:12:34.831325] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.661 null0 00:22:07.661 [2024-07-15 19:12:34.863224] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:07.661 [2024-07-15 19:12:34.863448] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:07.661 19:12:34 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.661 19:12:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:07.661 1051979210 00:22:07.661 19:12:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:07.661 340135340 00:22:07.661 19:12:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85698 00:22:07.661 19:12:34 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:07.661 19:12:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85698 /var/tmp/bperf.sock 00:22:07.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:07.661 19:12:34 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85698 ']' 00:22:07.661 19:12:34 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:07.661 19:12:34 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.661 19:12:34 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:07.661 19:12:34 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.661 19:12:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:07.661 [2024-07-15 19:12:34.943016] Starting SPDK v24.09-pre git sha1 cdc37ee83 / DPDK 24.03.0 initialization... 
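The bdevperf process launched above runs with --wait-for-rpc, so before any I/O the test must enable the Linux keyring backend, start the framework, and only then attach the controller using the kernel key; the same three RPCs appear in the log that follows. A hedged sketch of that bring-up sequence, reusing the rpc.py invocations exactly as logged (the small wrapper function is illustrative):

```python
#!/usr/bin/env python3
# Hedged sketch of the post---wait-for-rpc bring-up done by keyring/linux.sh:
# enable the Linux keyring backend, start the framework, then attach the
# NVMe/TCP controller with the kernel key ":spdk-test:key0".
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bperf.sock"

def rpc(*args):
    return subprocess.run([RPC, "-s", SOCK, *args], check=True,
                          capture_output=True, text=True).stdout

rpc("keyring_linux_set_options", "--enable")   # linux.sh@73
rpc("framework_start_init")                    # linux.sh@74
rpc("bdev_nvme_attach_controller",             # linux.sh@75
    "-b", "nvme0", "-t", "tcp", "-a", "127.0.0.1", "-s", "4420",
    "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode0",
    "-q", "nqn.2016-06.io.spdk:host0", "--psk", ":spdk-test:key0")
```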
00:22:07.661 [2024-07-15 19:12:34.943293] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85698 ] 00:22:07.919 [2024-07-15 19:12:35.082525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.919 [2024-07-15 19:12:35.188531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.872 19:12:35 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.872 19:12:35 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:22:08.872 19:12:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:08.872 19:12:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:09.132 19:12:36 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:09.132 19:12:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:09.390 [2024-07-15 19:12:36.527374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:09.390 19:12:36 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:09.390 19:12:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:09.649 [2024-07-15 19:12:36.886350] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:09.908 nvme0n1 00:22:09.908 19:12:36 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:09.908 19:12:36 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:09.908 19:12:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:09.908 19:12:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:09.908 19:12:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:09.908 19:12:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.166 19:12:37 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:10.166 19:12:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:10.166 19:12:37 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:10.166 19:12:37 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:10.166 19:12:37 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.166 19:12:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.166 19:12:37 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:10.424 19:12:37 keyring_linux -- keyring/linux.sh@25 -- # sn=1051979210 00:22:10.424 19:12:37 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:10.424 19:12:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
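check_keys then cross-checks SPDK's view of the key against the kernel's: the serial reported by keyring_get_keys (.sn) must match what `keyctl search @s user :spdk-test:key0` returns, and `keyctl print` on that serial must yield the interchange string, as the comparisons below show. A hedged sketch of that comparison; the keyctl commands are the ones logged, the wrapper functions are illustrative.

```python
#!/usr/bin/env python3
# Hedged sketch of the keyring/linux.sh check_keys cross-check: the serial
# SPDK reports via keyring_get_keys must match the session-keyring serial
# found by keyctl, and keyctl print must return the NVMeTLSkey-1 string.
import json
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bperf.sock"

def spdk_sn(name):
    keys = json.loads(subprocess.run([RPC, "-s", SOCK, "keyring_get_keys"],
                                     check=True, capture_output=True,
                                     text=True).stdout)
    return next(k["sn"] for k in keys if k["name"] == name)

def kernel_sn(name):
    # Equivalent to: keyctl search @s user :spdk-test:key0
    out = subprocess.run(["keyctl", "search", "@s", "user", name],
                         check=True, capture_output=True, text=True).stdout
    return int(out.strip())

name = ":spdk-test:key0"
sn = kernel_sn(name)
assert int(spdk_sn(name)) == sn          # e.g. 1051979210 in this run
material = subprocess.run(["keyctl", "print", str(sn)],
                          check=True, capture_output=True, text=True).stdout
assert material.strip().startswith("NVMeTLSkey-1:00:")
```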
00:22:10.424 19:12:37 keyring_linux -- keyring/linux.sh@26 -- # [[ 1051979210 == \1\0\5\1\9\7\9\2\1\0 ]] 00:22:10.424 19:12:37 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1051979210 00:22:10.424 19:12:37 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:10.424 19:12:37 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:10.424 Running I/O for 1 seconds... 00:22:11.796 00:22:11.796 Latency(us) 00:22:11.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.796 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:11.796 nvme0n1 : 1.01 13251.83 51.76 0.00 0.00 9602.44 7208.96 16920.20 00:22:11.796 =================================================================================================================== 00:22:11.796 Total : 13251.83 51.76 0.00 0.00 9602.44 7208.96 16920.20 00:22:11.796 0 00:22:11.796 19:12:38 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:11.796 19:12:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:11.796 19:12:38 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:11.796 19:12:38 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:11.796 19:12:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:11.796 19:12:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:11.796 19:12:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:11.796 19:12:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:12.054 19:12:39 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:12.054 19:12:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:12.054 19:12:39 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:12.054 19:12:39 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:12.054 19:12:39 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:22:12.054 19:12:39 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:12.054 19:12:39 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:12.054 19:12:39 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.054 19:12:39 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:12.054 19:12:39 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.054 19:12:39 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:12.054 19:12:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:12.313 [2024-07-15 19:12:39.536615] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:12.313 [2024-07-15 19:12:39.537280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1448e50 (107): Transport endpoint is not connected 00:22:12.313 [2024-07-15 19:12:39.538270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1448e50 (9): Bad file descriptor 00:22:12.313 [2024-07-15 19:12:39.539266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:12.313 [2024-07-15 19:12:39.539290] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:12.313 [2024-07-15 19:12:39.539302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:12.313 request: 00:22:12.313 { 00:22:12.313 "name": "nvme0", 00:22:12.313 "trtype": "tcp", 00:22:12.313 "traddr": "127.0.0.1", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "4420", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:12.313 "prchk_reftag": false, 00:22:12.313 "prchk_guard": false, 00:22:12.313 "hdgst": false, 00:22:12.313 "ddgst": false, 00:22:12.313 "psk": ":spdk-test:key1", 00:22:12.313 "method": "bdev_nvme_attach_controller", 00:22:12.313 "req_id": 1 00:22:12.313 } 00:22:12.313 Got JSON-RPC error response 00:22:12.313 response: 00:22:12.313 { 00:22:12.313 "code": -5, 00:22:12.313 "message": "Input/output error" 00:22:12.313 } 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@33 -- # sn=1051979210 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1051979210 00:22:12.313 1 links removed 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@33 -- # sn=340135340 00:22:12.313 19:12:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 340135340 00:22:12.313 1 links removed 00:22:12.313 
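The key cleanup recorded above resolves each test key to its kernel serial number and then drops its link from the session keyring. Condensed to the underlying keyctl calls it amounts to roughly the sketch below; ":spdk-test:key0" is the key name used by this run, while "sn" is just a stand-in for whatever serial number the search returns (1051979210 here):
sn=$(keyctl search @s user :spdk-test:key0)   # look the key up in the session keyring
keyctl print "$sn"                            # optionally dump the stored PSK for verification
keyctl unlink "$sn"                           # remove the link, hence the "1 links removed" output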
19:12:39 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85698 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85698 ']' 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85698 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85698 00:22:12.313 killing process with pid 85698 00:22:12.313 Received shutdown signal, test time was about 1.000000 seconds 00:22:12.313 00:22:12.313 Latency(us) 00:22:12.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.313 =================================================================================================================== 00:22:12.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85698' 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@967 -- # kill 85698 00:22:12.313 19:12:39 keyring_linux -- common/autotest_common.sh@972 -- # wait 85698 00:22:12.576 19:12:39 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85680 00:22:12.576 19:12:39 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85680 ']' 00:22:12.576 19:12:39 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85680 00:22:12.576 19:12:39 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:12.576 19:12:39 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.576 19:12:39 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85680 00:22:12.576 killing process with pid 85680 00:22:12.576 19:12:39 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:12.576 19:12:39 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:12.576 19:12:39 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85680' 00:22:12.576 19:12:39 keyring_linux -- common/autotest_common.sh@967 -- # kill 85680 00:22:12.576 19:12:39 keyring_linux -- common/autotest_common.sh@972 -- # wait 85680 00:22:13.143 ************************************ 00:22:13.143 END TEST keyring_linux 00:22:13.143 ************************************ 00:22:13.143 00:22:13.143 real 0m6.621s 00:22:13.143 user 0m13.132s 00:22:13.143 sys 0m1.598s 00:22:13.143 19:12:40 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:13.143 19:12:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:13.143 19:12:40 -- common/autotest_common.sh@1142 -- # return 0 00:22:13.143 19:12:40 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:22:13.143 19:12:40 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:22:13.143 19:12:40 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:13.143 19:12:40 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:22:13.143 19:12:40 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:22:13.143 19:12:40 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:22:13.143 19:12:40 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:22:13.143 19:12:40 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:22:13.143 19:12:40 -- spdk/autotest.sh@347 -- # 
'[' 0 -eq 1 ']' 00:22:13.143 19:12:40 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:22:13.143 19:12:40 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:22:13.143 19:12:40 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:22:13.143 19:12:40 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:22:13.143 19:12:40 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:22:13.143 19:12:40 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:22:13.143 19:12:40 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:22:13.143 19:12:40 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:22:13.143 19:12:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:13.143 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:22:13.143 19:12:40 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:22:13.143 19:12:40 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:13.143 19:12:40 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:13.143 19:12:40 -- common/autotest_common.sh@10 -- # set +x 00:22:15.047 INFO: APP EXITING 00:22:15.047 INFO: killing all VMs 00:22:15.047 INFO: killing vhost app 00:22:15.047 INFO: EXIT DONE 00:22:15.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:15.305 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:15.305 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:16.239 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:16.239 Cleaning 00:22:16.239 Removing: /var/run/dpdk/spdk0/config 00:22:16.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:16.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:16.240 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:16.240 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:16.240 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:16.240 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:16.240 Removing: /var/run/dpdk/spdk1/config 00:22:16.240 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:16.240 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:16.240 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:16.240 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:16.240 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:16.240 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:16.240 Removing: /var/run/dpdk/spdk2/config 00:22:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:16.240 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:16.240 Removing: /var/run/dpdk/spdk3/config 00:22:16.240 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:16.240 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:16.240 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:16.240 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:16.240 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:16.240 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:16.240 Removing: /var/run/dpdk/spdk4/config 00:22:16.240 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:16.240 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:16.240 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 
00:22:16.240 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:16.240 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:16.240 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:16.240 Removing: /dev/shm/nvmf_trace.0 00:22:16.240 Removing: /dev/shm/spdk_tgt_trace.pid58731 00:22:16.240 Removing: /var/run/dpdk/spdk0 00:22:16.240 Removing: /var/run/dpdk/spdk1 00:22:16.240 Removing: /var/run/dpdk/spdk2 00:22:16.240 Removing: /var/run/dpdk/spdk3 00:22:16.240 Removing: /var/run/dpdk/spdk4 00:22:16.240 Removing: /var/run/dpdk/spdk_pid58585 00:22:16.240 Removing: /var/run/dpdk/spdk_pid58731 00:22:16.240 Removing: /var/run/dpdk/spdk_pid58935 00:22:16.240 Removing: /var/run/dpdk/spdk_pid59016 00:22:16.240 Removing: /var/run/dpdk/spdk_pid59049 00:22:16.240 Removing: /var/run/dpdk/spdk_pid59153 00:22:16.240 Removing: /var/run/dpdk/spdk_pid59171 00:22:16.240 Removing: /var/run/dpdk/spdk_pid59289 00:22:16.240 Removing: /var/run/dpdk/spdk_pid59485 00:22:16.240 Removing: /var/run/dpdk/spdk_pid59626 00:22:16.240 Removing: /var/run/dpdk/spdk_pid59690 00:22:16.240 Removing: /var/run/dpdk/spdk_pid59766 00:22:16.240 Removing: /var/run/dpdk/spdk_pid59857 00:22:16.240 Removing: /var/run/dpdk/spdk_pid59934 00:22:16.240 Removing: /var/run/dpdk/spdk_pid59973 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60008 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60070 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60169 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60613 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60665 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60711 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60727 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60800 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60816 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60883 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60899 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60950 00:22:16.240 Removing: /var/run/dpdk/spdk_pid60968 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61008 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61026 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61154 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61190 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61263 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61316 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61339 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61399 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61433 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61468 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61508 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61537 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61577 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61606 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61646 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61675 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61715 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61744 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61784 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61813 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61855 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61884 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61924 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61953 00:22:16.240 Removing: /var/run/dpdk/spdk_pid61995 00:22:16.240 Removing: /var/run/dpdk/spdk_pid62033 00:22:16.240 Removing: /var/run/dpdk/spdk_pid62066 00:22:16.240 Removing: /var/run/dpdk/spdk_pid62106 00:22:16.240 Removing: /var/run/dpdk/spdk_pid62170 00:22:16.240 Removing: /var/run/dpdk/spdk_pid62258 00:22:16.240 Removing: /var/run/dpdk/spdk_pid62566 00:22:16.498 Removing: 
/var/run/dpdk/spdk_pid62583 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62620 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62633 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62649 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62679 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62687 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62708 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62727 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62747 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62768 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62787 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62806 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62816 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62841 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62854 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62875 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62894 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62908 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62923 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62959 00:22:16.498 Removing: /var/run/dpdk/spdk_pid62973 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63008 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63066 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63100 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63110 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63138 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63152 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63161 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63203 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63217 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63251 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63260 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63270 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63279 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63289 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63304 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63308 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63323 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63351 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63378 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63393 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63416 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63431 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63444 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63479 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63496 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63528 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63530 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63543 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63551 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63558 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63567 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63574 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63587 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63656 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63708 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63814 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63851 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63892 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63912 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63934 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63954 00:22:16.498 Removing: /var/run/dpdk/spdk_pid63985 00:22:16.498 Removing: /var/run/dpdk/spdk_pid64001 00:22:16.498 Removing: /var/run/dpdk/spdk_pid64071 00:22:16.498 Removing: /var/run/dpdk/spdk_pid64098 00:22:16.498 Removing: /var/run/dpdk/spdk_pid64143 00:22:16.498 Removing: /var/run/dpdk/spdk_pid64205 00:22:16.498 Removing: /var/run/dpdk/spdk_pid64267 00:22:16.498 Removing: /var/run/dpdk/spdk_pid64291 
00:22:16.498 Removing: /var/run/dpdk/spdk_pid64382 00:22:16.498 Removing: /var/run/dpdk/spdk_pid64430 00:22:16.498 Removing: /var/run/dpdk/spdk_pid64463 00:22:16.498 Removing: /var/run/dpdk/spdk_pid64687 00:22:16.498 Removing: /var/run/dpdk/spdk_pid64779 00:22:16.498 Removing: /var/run/dpdk/spdk_pid64813 00:22:16.498 Removing: /var/run/dpdk/spdk_pid65124 00:22:16.498 Removing: /var/run/dpdk/spdk_pid65162 00:22:16.498 Removing: /var/run/dpdk/spdk_pid65456 00:22:16.498 Removing: /var/run/dpdk/spdk_pid65867 00:22:16.498 Removing: /var/run/dpdk/spdk_pid66142 00:22:16.498 Removing: /var/run/dpdk/spdk_pid66921 00:22:16.498 Removing: /var/run/dpdk/spdk_pid67740 00:22:16.498 Removing: /var/run/dpdk/spdk_pid67857 00:22:16.498 Removing: /var/run/dpdk/spdk_pid67924 00:22:16.498 Removing: /var/run/dpdk/spdk_pid69190 00:22:16.498 Removing: /var/run/dpdk/spdk_pid69396 00:22:16.756 Removing: /var/run/dpdk/spdk_pid72767 00:22:16.756 Removing: /var/run/dpdk/spdk_pid73068 00:22:16.756 Removing: /var/run/dpdk/spdk_pid73176 00:22:16.756 Removing: /var/run/dpdk/spdk_pid73311 00:22:16.756 Removing: /var/run/dpdk/spdk_pid73333 00:22:16.756 Removing: /var/run/dpdk/spdk_pid73362 00:22:16.756 Removing: /var/run/dpdk/spdk_pid73389 00:22:16.756 Removing: /var/run/dpdk/spdk_pid73487 00:22:16.756 Removing: /var/run/dpdk/spdk_pid73616 00:22:16.756 Removing: /var/run/dpdk/spdk_pid73766 00:22:16.756 Removing: /var/run/dpdk/spdk_pid73846 00:22:16.756 Removing: /var/run/dpdk/spdk_pid74037 00:22:16.756 Removing: /var/run/dpdk/spdk_pid74119 00:22:16.756 Removing: /var/run/dpdk/spdk_pid74217 00:22:16.756 Removing: /var/run/dpdk/spdk_pid74514 00:22:16.756 Removing: /var/run/dpdk/spdk_pid74897 00:22:16.756 Removing: /var/run/dpdk/spdk_pid74899 00:22:16.756 Removing: /var/run/dpdk/spdk_pid75176 00:22:16.756 Removing: /var/run/dpdk/spdk_pid75190 00:22:16.756 Removing: /var/run/dpdk/spdk_pid75204 00:22:16.756 Removing: /var/run/dpdk/spdk_pid75239 00:22:16.756 Removing: /var/run/dpdk/spdk_pid75245 00:22:16.756 Removing: /var/run/dpdk/spdk_pid75544 00:22:16.756 Removing: /var/run/dpdk/spdk_pid75588 00:22:16.756 Removing: /var/run/dpdk/spdk_pid75873 00:22:16.756 Removing: /var/run/dpdk/spdk_pid76069 00:22:16.756 Removing: /var/run/dpdk/spdk_pid76439 00:22:16.756 Removing: /var/run/dpdk/spdk_pid76943 00:22:16.756 Removing: /var/run/dpdk/spdk_pid77770 00:22:16.756 Removing: /var/run/dpdk/spdk_pid78358 00:22:16.756 Removing: /var/run/dpdk/spdk_pid78361 00:22:16.756 Removing: /var/run/dpdk/spdk_pid80262 00:22:16.756 Removing: /var/run/dpdk/spdk_pid80321 00:22:16.756 Removing: /var/run/dpdk/spdk_pid80377 00:22:16.756 Removing: /var/run/dpdk/spdk_pid80443 00:22:16.756 Removing: /var/run/dpdk/spdk_pid80558 00:22:16.756 Removing: /var/run/dpdk/spdk_pid80619 00:22:16.756 Removing: /var/run/dpdk/spdk_pid80679 00:22:16.756 Removing: /var/run/dpdk/spdk_pid80735 00:22:16.756 Removing: /var/run/dpdk/spdk_pid81057 00:22:16.756 Removing: /var/run/dpdk/spdk_pid82214 00:22:16.756 Removing: /var/run/dpdk/spdk_pid82354 00:22:16.756 Removing: /var/run/dpdk/spdk_pid82598 00:22:16.756 Removing: /var/run/dpdk/spdk_pid83147 00:22:16.756 Removing: /var/run/dpdk/spdk_pid83305 00:22:16.756 Removing: /var/run/dpdk/spdk_pid83462 00:22:16.756 Removing: /var/run/dpdk/spdk_pid83559 00:22:16.756 Removing: /var/run/dpdk/spdk_pid83718 00:22:16.756 Removing: /var/run/dpdk/spdk_pid83827 00:22:16.756 Removing: /var/run/dpdk/spdk_pid84480 00:22:16.756 Removing: /var/run/dpdk/spdk_pid84521 00:22:16.756 Removing: /var/run/dpdk/spdk_pid84551 00:22:16.756 Removing: 
/var/run/dpdk/spdk_pid84805 00:22:16.756 Removing: /var/run/dpdk/spdk_pid84842 00:22:16.756 Removing: /var/run/dpdk/spdk_pid84872 00:22:16.756 Removing: /var/run/dpdk/spdk_pid85300 00:22:16.756 Removing: /var/run/dpdk/spdk_pid85317 00:22:16.756 Removing: /var/run/dpdk/spdk_pid85567 00:22:16.756 Removing: /var/run/dpdk/spdk_pid85680 00:22:16.756 Removing: /var/run/dpdk/spdk_pid85698 00:22:16.756 Clean 00:22:16.756 19:12:44 -- common/autotest_common.sh@1451 -- # return 0 00:22:16.756 19:12:44 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:22:16.756 19:12:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.756 19:12:44 -- common/autotest_common.sh@10 -- # set +x 00:22:17.016 19:12:44 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:22:17.016 19:12:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:17.016 19:12:44 -- common/autotest_common.sh@10 -- # set +x 00:22:17.016 19:12:44 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:17.016 19:12:44 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:17.016 19:12:44 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:17.016 19:12:44 -- spdk/autotest.sh@391 -- # hash lcov 00:22:17.016 19:12:44 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:17.016 19:12:44 -- spdk/autotest.sh@393 -- # hostname 00:22:17.016 19:12:44 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:17.286 geninfo: WARNING: invalid characters removed from testname! 
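The coverage post-processing that begins here follows the usual lcov flow: capture counters from the instrumented build tree, merge them with the pre-test baseline, then strip paths that should not count against SPDK (the removal passes appear on the entries that follow). Reduced to its essential flags the sequence is roughly the sketch below; the repository path and test name are abbreviated, and cov_base.info is assumed to have been captured earlier in the run:
lcov -q -c --no-external -d "$repo" -t "$(hostname)" -o cov_test.info   # capture post-test counters
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info             # merge with the baseline
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                  # drop bundled DPDK sources
lcov -q -r cov_total.info '/usr/*' -o cov_total.info                    # drop system headers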
00:22:43.824 19:13:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:47.127 19:13:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:49.678 19:13:16 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:52.255 19:13:19 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:54.788 19:13:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:57.319 19:13:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:59.852 19:13:27 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:00.111 19:13:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:00.111 19:13:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:00.111 19:13:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.111 19:13:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.111 19:13:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.111 19:13:27 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.111 19:13:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.111 19:13:27 -- paths/export.sh@5 -- $ export PATH 00:23:00.111 19:13:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.111 19:13:27 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:00.111 19:13:27 -- common/autobuild_common.sh@444 -- $ date +%s 00:23:00.111 19:13:27 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721070807.XXXXXX 00:23:00.111 19:13:27 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721070807.Yqb3uB 00:23:00.111 19:13:27 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:23:00.111 19:13:27 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:23:00.111 19:13:27 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:00.111 19:13:27 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:00.111 19:13:27 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:00.111 19:13:27 -- common/autobuild_common.sh@460 -- $ get_config_params 00:23:00.111 19:13:27 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:23:00.111 19:13:27 -- common/autotest_common.sh@10 -- $ set +x 00:23:00.111 19:13:27 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:23:00.111 19:13:27 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:23:00.111 19:13:27 -- pm/common@17 -- $ local monitor 00:23:00.111 19:13:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:00.111 19:13:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:00.111 19:13:27 -- pm/common@21 -- $ date +%s 00:23:00.111 19:13:27 -- pm/common@25 -- $ sleep 1 00:23:00.111 19:13:27 -- pm/common@21 -- $ date +%s 00:23:00.111 19:13:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721070807 00:23:00.111 19:13:27 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721070807 00:23:00.111 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721070807_collect-vmstat.pm.log 00:23:00.111 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721070807_collect-cpu-load.pm.log 00:23:01.070 19:13:28 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:23:01.070 19:13:28 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:23:01.070 19:13:28 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:23:01.070 19:13:28 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:23:01.070 19:13:28 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:23:01.070 19:13:28 -- spdk/autopackage.sh@19 -- $ timing_finish 00:23:01.070 19:13:28 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:01.070 19:13:28 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:23:01.070 19:13:28 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:01.070 19:13:28 -- spdk/autopackage.sh@20 -- $ exit 0 00:23:01.070 19:13:28 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:01.070 19:13:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:23:01.070 19:13:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:23:01.070 19:13:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:01.070 19:13:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:23:01.070 19:13:28 -- pm/common@44 -- $ pid=87420 00:23:01.070 19:13:28 -- pm/common@50 -- $ kill -TERM 87420 00:23:01.070 19:13:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:01.070 19:13:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:23:01.070 19:13:28 -- pm/common@44 -- $ pid=87422 00:23:01.070 19:13:28 -- pm/common@50 -- $ kill -TERM 87422 00:23:01.070 + [[ -n 5106 ]] 00:23:01.070 + sudo kill 5106 00:23:01.081 [Pipeline] } 00:23:01.100 [Pipeline] // timeout 00:23:01.105 [Pipeline] } 00:23:01.124 [Pipeline] // stage 00:23:01.130 [Pipeline] } 00:23:01.151 [Pipeline] // catchError 00:23:01.161 [Pipeline] stage 00:23:01.164 [Pipeline] { (Stop VM) 00:23:01.179 [Pipeline] sh 00:23:01.459 + vagrant halt 00:23:04.746 ==> default: Halting domain... 00:23:11.315 [Pipeline] sh 00:23:11.588 + vagrant destroy -f 00:23:15.769 ==> default: Removing domain... 
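Before the workspace is moved and archived, the run stops the resource monitors it started earlier and tears down the test VM. Stripped of the pipeline plumbing, the shutdown recorded above amounts to roughly this sketch; the pid-file names and the vagrant commands come from the log, but the loop and the "$out" output-directory variable are illustrative:
for pidfile in "$out"/power/collect-cpu-load.pid "$out"/power/collect-vmstat.pid; do
  [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"   # stop the CPU-load and vmstat collectors
done
vagrant halt          # graceful shutdown, "Halting domain..."
vagrant destroy -f    # delete the VM without prompting, "Removing domain..."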
00:23:15.784 [Pipeline] sh 00:23:16.064 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:16.074 [Pipeline] } 00:23:16.094 [Pipeline] // stage 00:23:16.099 [Pipeline] } 00:23:16.116 [Pipeline] // dir 00:23:16.130 [Pipeline] } 00:23:16.177 [Pipeline] // wrap 00:23:16.182 [Pipeline] } 00:23:16.192 [Pipeline] // catchError 00:23:16.200 [Pipeline] stage 00:23:16.201 [Pipeline] { (Epilogue) 00:23:16.210 [Pipeline] sh 00:23:16.480 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:23.074 [Pipeline] catchError 00:23:23.075 [Pipeline] { 00:23:23.086 [Pipeline] sh 00:23:23.358 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:23.358 Artifacts sizes are good 00:23:23.366 [Pipeline] } 00:23:23.383 [Pipeline] // catchError 00:23:23.394 [Pipeline] archiveArtifacts 00:23:23.400 Archiving artifacts 00:23:23.544 [Pipeline] cleanWs 00:23:23.555 [WS-CLEANUP] Deleting project workspace... 00:23:23.555 [WS-CLEANUP] Deferred wipeout is used... 00:23:23.560 [WS-CLEANUP] done 00:23:23.562 [Pipeline] } 00:23:23.579 [Pipeline] // stage 00:23:23.585 [Pipeline] } 00:23:23.601 [Pipeline] // node 00:23:23.607 [Pipeline] End of Pipeline 00:23:23.641 Finished: SUCCESS